libdb_dotnet48
A log sequence number, which specifies a unique location in a log file.
The log file number.
The offset in the log file.
Instantiate a new LSN object
The log file number.
The offset in the log file.
Compare two LSNs.
The first LSN to compare
The second LSN to compare
0 if they are equal, 1 if lsn1 is greater than lsn2, and -1 if lsn1
is less than lsn2.
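As a brief illustrative sketch of the members described above (assuming the LSN(uint, uint) constructor and the static Compare method of libdb_dotnet48; running it requires the native Berkeley DB library):

```csharp
using System;
using BerkeleyDB;

class LsnCompareExample {
    static void Main() {
        // An LSN is a (log file number, offset) pair.
        LSN early = new LSN(1, 1024);
        LSN late = new LSN(2, 0);

        // Compare returns 0 for equal LSNs, 1 if the first is greater,
        // and -1 if the first is less, as documented above. Any location
        // in log file 1 precedes any location in log file 2.
        Console.WriteLine(LSN.Compare(early, late));
    }
}
```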
A class representing a HashDatabase. The Hash format is an extensible,
dynamic hashing scheme.
A class representing a Berkeley DB database, a base class for access
method specific classes.
The base class from which all database classes inherit
Protected constructor
The environment in which to create this database
Flags to pass to the DB->create() method
Create a new database object with the same underlying DB handle as
. Used during Database.Open to get an
object of the correct DBTYPE.
Database to clone
Protected factory method to create and open a new database object.
The database's filename
The subdatabase's name
The database's configuration
The transaction in which to open the database
A new, open database object
Flush any cached database information to disk, close any open
objects, free any
allocated resources, and close any underlying files.
Although closing a database will close any open cursors, it is
recommended that applications explicitly close all their Cursor
objects before closing the database. This is because when a cursor
is explicitly closed, the memory allocated for it is reclaimed;
this does not happen if you close a database while cursors are
still open.
The same rule holds true, for the same reasons, for transaction
objects. Simply make sure you resolve all your transaction objects
before closing your database handle.
Because key/data pairs are cached in memory, applications should
always either close database handles or sync their data to disk
(using Sync) before exiting, to ensure that any data cached in main
memory is reflected in the underlying file system.
When called on a database that is the primary database for a
secondary index, the primary database should be closed only after
all secondary indices referencing it have been closed.
When multiple threads are using the object concurrently, only a
single thread may call the Close method.
The object may not be accessed again after Close is called,
regardless of its outcome.
Optionally flush any cached database information to disk, close any
open objects, free
any allocated resources, and close any underlying files.
If false, do not flush cached information to disk.
The sync parameter is a dangerous option. It should be set to false
only if the application is doing logging (with transactions) so that
the database is recoverable after a system or application crash, or
if the database is always generated from scratch after any system or
application crash.
It is important to understand that flushing cached information to
disk only minimizes the window of opportunity for corrupted data.
Although unlikely, it is possible for database corruption to happen
if a system or application crash occurs while writing data to the
database. To ensure that database corruption never occurs,
applications must either use transactions and logging with automatic
recovery, or edit a copy of the database and, once all applications
using the database have successfully called Close, atomically
replace the original database with the updated copy.
Note that this parameter only works when the database has been
opened using an environment.
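The recommended shutdown order can be sketched as follows (a hedged sketch against the libdb_dotnet48 API; the filename is illustrative and the native Berkeley DB library must be present):

```csharp
using BerkeleyDB;

class CloseExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("demo.db", cfg);
        Cursor cursor = db.Cursor();

        // ... use the cursor ...

        // 1. Close cursors explicitly so their memory is reclaimed now.
        cursor.Close();
        // 2. Resolve any transactions before closing the handle.
        // 3. Close() flushes cached pages; Close(false) skips the flush,
        //    which is safe only under the conditions described above.
        db.Close();
    }
}
```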
Create a database cursor.
A newly created cursor
Create a database cursor with the given configuration.
The configuration properties for the cursor.
A newly created cursor
Create a transactionally protected database cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Create a transactionally protected database cursor with the given
configuration.
The configuration properties for the cursor.
The transaction context in which the cursor may be used.
A newly created cursor
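A minimal cursor-iteration sketch, assuming the Cursor type's MoveNext/Current members as shipped in libdb_dotnet48 (filename illustrative; not runnable without the native library):

```csharp
using System;
using System.Text;
using BerkeleyDB;

class CursorIteration {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("iter.db", cfg);
        db.Put(new DatabaseEntry(Encoding.ASCII.GetBytes("alpha")),
               new DatabaseEntry(Encoding.ASCII.GetBytes("1")));

        // Walk every key/data pair in key order.
        Cursor cursor = db.Cursor();
        while (cursor.MoveNext()) {
            string key = Encoding.ASCII.GetString(cursor.Current.Key.Data);
            string val = Encoding.ASCII.GetString(cursor.Current.Value.Data);
            Console.WriteLine(key + " => " + val);
        }
        cursor.Close();   // close the cursor before the database
        db.Close();
    }
}
```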
Remove key/data pairs from the database. The key/data pair
associated with is discarded from the
database. In the presence of duplicate key values, all records
associated with the designated key will be discarded.
When called on a secondary database, remove the key/data pair from
the primary database and all secondary indices.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected.
Discard the key/data pair associated with .
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
Remove key/data pairs from the database. The key/data pair
associated with is discarded from the
database. In the presence of duplicate key values, all records
associated with the designated key will be discarded.
When called on a secondary database, remove the key/data pair from
the primary database and all secondary indices.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
Discard the key/data pair associated with .
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
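A sketch of a transactionally protected delete, assuming the DatabaseEnvironment and Transaction types of libdb_dotnet48 (the environment home directory and filename are illustrative):

```csharp
using System.Text;
using BerkeleyDB;

class DeleteExample {
    static void Main() {
        var envCfg = new DatabaseEnvironmentConfig {
            Create = true, UseMPool = true, UseTxns = true, UseLogging = true
        };
        DatabaseEnvironment env = DatabaseEnvironment.Open("envhome", envCfg);

        var dbCfg = new BTreeDatabaseConfig {
            Creation = CreatePolicy.IF_NEEDED, Env = env
        };
        Transaction txn = env.BeginTransaction();
        BTreeDatabase db = BTreeDatabase.Open("del.db", dbCfg, txn);
        var key = new DatabaseEntry(Encoding.ASCII.GetBytes("k"));
        db.Put(key, new DatabaseEntry(Encoding.ASCII.GetBytes("v")), txn);

        // Delete within the same transaction; a NotFoundException would
        // be thrown if the key were absent, as documented above.
        db.Delete(key, txn);
        txn.Commit();
        db.Close();
        env.Close();
    }
}
```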
Check whether appears in the database.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected.
The key to search for.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
True if appears in the database, false
otherwise.
Check whether appears in the database.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
True if appears in the database, false
otherwise.
Check whether appears in the database.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
True if appears in the database, false
otherwise.
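A small sketch of the Exists probe described above (libdb_dotnet48 API assumed; filename illustrative):

```csharp
using System;
using System.Text;
using BerkeleyDB;

class ExistsExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("exists.db", cfg);
        var key = new DatabaseEntry(Encoding.ASCII.GetBytes("present"));
        db.Put(key, new DatabaseEntry(Encoding.ASCII.GetBytes("x")));

        // Exists probes for a key without retrieving its data.
        Console.WriteLine(db.Exists(key));
        Console.WriteLine(db.Exists(
            new DatabaseEntry(Encoding.ASCII.GetBytes("absent"))));
        db.Close();
    }
}
```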
Retrieve a key/data pair from the database. In the presence of
duplicate key values, Get will return the first data item for
.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected.
The key to search for
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Retrieve a key/data pair from the database. In the presence of
duplicate key values, Get will return the first data item for
.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Retrieve a key/data pair from the database. In the presence of
duplicate key values, Get will return the first data item for
.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Protected method to retrieve data from the underlying DB handle.
The key to search for. If null a new DatabaseEntry is created.
The data to search for. If null a new DatabaseEntry is created.
The txn for this operation.
Locking info for this operation.
Flags value specifying which type of get to perform. Passed
directly to DB->get().
A whose Key
parameter is and whose Value parameter is the
retrieved data.
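The Get overloads above can be sketched as follows (a hedged example against the libdb_dotnet48 API; filename illustrative):

```csharp
using System;
using System.Text;
using BerkeleyDB;

class GetExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("get.db", cfg);
        db.Put(new DatabaseEntry(Encoding.ASCII.GetBytes("fruit")),
               new DatabaseEntry(Encoding.ASCII.GetBytes("apple")));

        // Get returns a KeyValuePair<DatabaseEntry, DatabaseEntry>.
        var pair = db.Get(new DatabaseEntry(Encoding.ASCII.GetBytes("fruit")));
        Console.WriteLine(Encoding.ASCII.GetString(pair.Value.Data));

        try {
            db.Get(new DatabaseEntry(Encoding.ASCII.GetBytes("missing")));
        } catch (NotFoundException) {
            // Thrown when the key is not in the database, as documented.
        }
        db.Close();
    }
}
```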
Retrieve a key/data pair from the database which matches
and .
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected.
The key to search for
The data to search for
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is
.
Retrieve a key/data pair from the database which matches
and .
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for
The data to search for
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is
.
Retrieve a key/data pair from the database which matches
and .
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
The key to search for
The data to search for
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A whose Key
parameter is and whose Value parameter is
.
Display the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
The statistical information is described by the
, ,
, and classes.
Display the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
If true, display all available information.
Display the database statistical information.
The statistical information is described by the
, ,
, and classes.
Display the database statistical information.
If true, display all available information.
Remove the underlying file represented by
, incidentally removing all of the
databases it contained.
The file to remove
Remove the underlying file represented by
, incidentally removing all of the
databases it contained.
The file to remove
The DatabaseEnvironment the database belongs to
Remove the database specified by and
.
The file to remove
The database to remove
Remove the database specified by and
.
Applications should never remove databases with open DB handles or,
in the case of removing a file, when any database in the file has an
open handle. In particular, some architectures do not permit the
removal of files with open system handles. On these architectures,
attempts to remove databases currently in use by any thread of
control in the system may fail.
Remove should not be called if the remove is intended to be
transactionally safe;
should be
used instead.
The file to remove
The database to remove
The DatabaseEnvironment the database belongs to
Rename the underlying file represented by
, incidentally renaming all of the
databases it contained.
The file to rename
The new filename
Rename the underlying file represented by
, incidentally renaming all of the
databases it contained.
The file to rename
The new filename
The DatabaseEnvironment the database belongs to
Rename the database specified by and
.
The file to rename
The database to rename
The new database name
Rename the database specified by and
.
Applications should not rename databases that are currently in use.
If an underlying file is being renamed and logging is currently
enabled in the database environment, no database in the file may be
open when Rename is called. In particular, some architectures do not
permit renaming files with open handles. On these architectures,
attempts to rename databases that are currently in use by any thread
of control in the system may fail.
Rename should not be called if the rename is intended to be
transactionally safe;
should be
used instead.
The file to rename
The database to rename
The new database name
The DatabaseEnvironment the database belongs to
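The static Remove and Rename operations above can be sketched together (libdb_dotnet48 API assumed; filenames illustrative; note that no handle on the file may be open):

```csharp
using BerkeleyDB;

class RemoveRenameExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase.Open("old.db", cfg).Close();   // create, then close

        // Rename the underlying file and all databases it contains.
        Database.Rename("old.db", "new.db");

        // Remove the file and every database it contains. This is not
        // transactionally safe; see the note above.
        Database.Remove("new.db");
    }
}
```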
Flush any cached information to disk.
If the database is in memory only, Sync has no effect and will
always succeed.
It is important to understand that flushing cached information to
disk only minimizes the window of opportunity for corrupted data.
Although unlikely, it is possible for database corruption to happen
if a system or application crash occurs while writing data to the
database. To ensure that database corruption never occurs,
applications must either use transactions and logging with automatic
recovery, or edit a copy of the database and, once all applications
using the database have successfully called Close, atomically
replace the original database with the updated copy.
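A minimal Sync sketch (libdb_dotnet48 API assumed; filename illustrative):

```csharp
using System.Text;
using BerkeleyDB;

class SyncExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("sync.db", cfg);
        db.Put(new DatabaseEntry(Encoding.ASCII.GetBytes("k")),
               new DatabaseEntry(Encoding.ASCII.GetBytes("v")));

        // Flush dirty pages to disk without closing the handle; this
        // narrows, but does not eliminate, the corruption window noted
        // above.
        db.Sync();
        db.Close();
    }
}
```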
Empty the database, discarding all records it contains.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected.
When called on a database configured with secondary indices,
Truncate will truncate the primary database and all secondary
indices. A count of the records discarded from the primary database
is returned.
The number of records discarded from the database.
Empty the database, discarding all records it contains.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The number of records discarded from the database.
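A short Truncate sketch, assuming Truncate() returns the discard count as an unsigned integer (libdb_dotnet48 API; filename illustrative):

```csharp
using System;
using System.Text;
using BerkeleyDB;

class TruncateExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        BTreeDatabase db = BTreeDatabase.Open("trunc.db", cfg);
        db.Put(new DatabaseEntry(Encoding.ASCII.GetBytes("a")),
               new DatabaseEntry(Encoding.ASCII.GetBytes("1")));

        // Truncate discards every record and reports how many went away.
        uint discarded = db.Truncate();
        Console.WriteLine(discarded);
        db.Close();
    }
}
```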
Release the resources held by this object, and close the database if
it's still open.
If true, all database modification operations based on this object
will be transactionally protected.
The size of the shared memory buffer pool -- that is, the cache.
The CreatePolicy with which this database was opened.
The name of this database, if it has one.
If true, do checksum verification of pages read into the cache from
the backing filestore.
Berkeley DB uses the SHA1 Secure Hash Algorithm if encryption is
configured and a general hash algorithm if it is not.
The algorithm used by the Berkeley DB library to perform encryption
and decryption.
If true, encrypt all data stored in the database.
The database byte order.
The mechanism for reporting detailed error messages to the
application.
When an error occurs in the Berkeley DB library, a
, or subclass of DatabaseException,
is thrown. In some cases, however, the exception may be insufficient
to completely describe the cause of the error, especially during
initial application debugging.
In some cases, when an error occurs, Berkeley DB will call the given
delegate with additional error information. It is up to the delegate
to display the error message in an appropriate manner.
Setting ErrorFeedback to null unconfigures the callback interface.
This error-logging enhancement does not slow performance or
significantly increase application size, and may be run during
normal operation as well as during application debugging.
For databases opened inside of a DatabaseEnvironment, setting
ErrorFeedback affects the entire environment and is equivalent to
setting DatabaseEnvironment.ErrorFeedback.
For databases not opened in an environment, setting ErrorFeedback
configures operations performed using the specified object, not all
operations performed on the underlying database.
The prefix string that appears before error messages issued by
Berkeley DB.
For databases opened inside of a DatabaseEnvironment, setting
ErrorPrefix affects the entire environment and is equivalent to
setting .
Setting ErrorPrefix configures operations performed using the
specified object, not all operations performed on the underlying
database.
Monitor progress within long running operations.
Some operations performed by the Berkeley DB library can take
non-trivial amounts of time. The Feedback delegate can be used by
applications to monitor progress within these operations. When an
operation is likely to take a long time, Berkeley DB will call the
specified delegate with progress information.
It is up to the delegate to display this information in an
appropriate manner.
The filename of this database, if it has one.
If true, the object is free-threaded; that is, concurrently usable
by multiple threads in the address space.
If true, the object references a physical file supporting multiple
databases.
If true, the object is a handle on a database whose key values are
the names of the databases stored in the physical file and whose
data values are opaque objects. No keys or data values may be
modified or stored using the database handle.
If true, the underlying database files were created on an
architecture of the same byte order as the current one. This
information may be used to determine whether application data needs
to be adjusted for this architecture or not.
If true, this database is not mapped into process memory.
See for further
information.
If true, Berkeley DB will not write log records for this database.
The database's current page size.
If was not set by
your application, then the default pagesize is selected based on the
underlying filesystem I/O block size.
The cache priority for pages referenced by this object.
If true, this database has been opened for reading only. Any attempt
to modify items in the database will fail, regardless of the actual
permissions of any underlying files.
If true, this database supports transactional read operations with
degree 1 isolation. Read operations on the database may request the
return of modified but not yet committed data.
If true, this database has been opened in a transactional mode.
If true, the underlying file was physically truncated upon open,
discarding all previous databases it might have held.
The type of the underlying access method (and file format). This
value may be used to determine the type of the database after an
.
If true, the database was opened with support for multiversion
concurrency control.
Protected constructor
The environment in which to create this database
Flags to pass to the DB->create() method
Create a new database object with the same underlying DB handle as
. Used during Database.Open to get an
object of the correct DBTYPE.
Database to clone
Instantiate a new Database object and open the database represented
by . The file specified by
must exist.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database.
The database's configuration
A new, open database object
Instantiate a new Database object and open the database represented
by and .
The file specified by must exist.
If is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new Database object and open the database represented
by . The file specified by
must exist.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new Database object and open the database represented
by and .
The file specified by must exist.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
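The Open overloads above can be sketched side by side (libdb_dotnet48 API assumed; filenames and database names are illustrative):

```csharp
using BerkeleyDB;

class OpenExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };

        // One database per file.
        BTreeDatabase plain = BTreeDatabase.Open("single.db", cfg);

        // Multiple named databases in a single file.
        BTreeDatabase named = BTreeDatabase.Open("multi.db", "first", cfg);

        // Strictly temporary in-memory database: null filename.
        BTreeDatabase temp = BTreeDatabase.Open(null, cfg);

        plain.Close();
        named.Close();
        temp.Close();
    }
}
```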
If a key/data pair in the database matches
and , return the key and all duplicate data
items.
The key to search for
The data to search for
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
If a key/data pair in the database matches
and , return the key and all duplicate data
items.
The key to search for
The data to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
If a key/data pair in the database matches
and , return the key and all duplicate data
items.
The key to search for
The data to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
If a key/data pair in the database matches
and , return the key and all duplicate data
items.
The key to search for
The data to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A NotFoundException is thrown if and
are not in the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
Retrieve a key and all duplicate data items from the database.
The key to search for
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
Retrieve a key and all duplicate data items from the database.
The key to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
A NotFoundException is thrown if is not in
the database.
A KeyEmptyException is thrown if the database is a
or
database and exists, but was never explicitly
created by the application or was later deleted.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
Retrieve a key and all duplicate data items from the database.
The key to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
Retrieve a key and all duplicate data items from the database.
The key to search for
The initial size of the buffer to fill with duplicate data items. If
the buffer is not large enough, it will be automatically resized.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A
whose Key parameter is and whose Value
parameter is the retrieved data items.
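A hedged GetMultiple sketch, assuming the returned Value is an enumerable MultipleDatabaseEntry of duplicate data items (libdb_dotnet48 API; filename and buffer size illustrative):

```csharp
using System;
using System.Text;
using BerkeleyDB;

class GetMultipleExample {
    static void Main() {
        var cfg = new BTreeDatabaseConfig {
            Creation = CreatePolicy.IF_NEEDED,
            Duplicates = DuplicatesPolicy.UNSORTED
        };
        BTreeDatabase db = BTreeDatabase.Open("dups.db", cfg);
        var key = new DatabaseEntry(Encoding.ASCII.GetBytes("color"));
        db.Put(key, new DatabaseEntry(Encoding.ASCII.GetBytes("red")));
        db.Put(key, new DatabaseEntry(Encoding.ASCII.GetBytes("blue")));

        // Fetch the key and all of its duplicates in one call; the 1KB
        // buffer grows automatically if it is too small, as noted above.
        var result = db.GetMultiple(key, 1024);
        foreach (DatabaseEntry d in result.Value)
            Console.WriteLine(Encoding.ASCII.GetString(d.Data));
        db.Close();
    }
}
```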
Create a specialized join cursor for use in performing equality or
natural joins on secondary indices.
Once the cursors have been passed as part of ,
they should not be accessed or modified until the newly created
has been closed, or else inconsistent
results may be returned.
Joined values are retrieved by doing a sequential iteration over the
first cursor in , and a nested iteration over
each secondary cursor in the order they are specified in the
curslist parameter. This requires database traversals to search for
the current datum in all the cursors after the first. For this
reason, the best join performance normally results from sorting the
cursors from the one that refers to the least number of data items
to the one that refers to the most.
An array of SecondaryCursors. Each cursor must have been initialized
to refer to the key on which the underlying database should be
joined.
If true, sort the cursors from the one that refers to the least
number of data items to the one that refers to the most. If the
data are structured so that cursors with many data items also share
many common elements, higher performance will result from listing
those cursors before cursors with fewer data items; that is, a sort
order other than the default. A setting of false permits
applications to perform join optimization prior to calling Join.
A specialized join cursor for use in performing equality or natural
joins on secondary indices.
Store the key/data pair in the database, replacing any previously
existing key if duplicates are disallowed, or adding a duplicate
data item if duplicates are allowed.
If the database supports duplicates, add the new data value at the
end of the duplicate set. If the database supports sorted
duplicates, the new data value is inserted at the correct sorted
location.
The key to store in the database
The data item to store in the database
Store the key/data pair in the database, replacing any previously
existing key if duplicates are disallowed, or adding a duplicate
data item if duplicates are allowed.
The key to store in the database
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Store the key/data pair in the database, only if the key does not
already appear in the database.
This enforcement of key uniqueness applies only to the primary key;
the behavior of insertions into secondary databases is not affected.
In particular, the insertion of a record that would result in the
creation of a duplicate key in a secondary database that allows
duplicates would not be prevented by the use of this method.
The key to store in the database
The data item to store in the database
Store the key/data pair in the database, only if the key does not
already appear in the database.
This enforcement of key uniqueness applies only to the primary key;
the behavior of insertions into secondary databases is not affected.
In particular, the insertion of a record that would result in the
creation of a duplicate key in a secondary database that allows
duplicates would not be prevented by the use of this method.
The key to store in the database
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
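The Put and PutNoOverwrite variants described above can be illustrated with a short C# sketch. It assumes the BerkeleyDB namespace of this library; the configuration property names and the KeyExistException type are assumptions and may differ from the actual binding.

```csharp
using System;
using System.Text;
using BerkeleyDB;

class PutExample {
    static void Main() {
        // Open a Hash database, creating it if needed (property names assumed).
        var cfg = new HashDatabaseConfig();
        cfg.Creation = CreatePolicy.IF_NEEDED;
        HashDatabase db = HashDatabase.Open("example.db", cfg);

        var key = new DatabaseEntry(Encoding.UTF8.GetBytes("fruit"));
        var data = new DatabaseEntry(Encoding.UTF8.GetBytes("apple"));

        // Unconditional store: replaces the pair, or adds a duplicate if
        // the database permits duplicates.
        db.Put(key, data);

        // Conditional store: succeeds only if the key is not yet present.
        try {
            db.PutNoOverwrite(key, data);
        } catch (KeyExistException) {
            Console.WriteLine("key already present");
        }

        db.Close();
    }
}
```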
Protected wrapper for DB->put. Used by subclasses for access method
specific operations.
The key to store in the database
The data item to store in the database
Transaction with which to protect the put
Flags to pass to DB->put
Write the key/data pairs from all databases in the file to
. Key values are written for Btree, Hash
and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
Write the key/data pairs from all databases in the file to
. Key values are written for Btree, Hash
and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
If true and characters in either the key or data items are printing
characters (as defined by isprint(3)), use printing characters to
represent them. This setting permits users to use standard text
editors and tools to modify the contents of databases or selectively
remove data from salvager output.
Write the key/data pairs from all databases in the file to
. Key values are written for Btree,
Hash and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
The TextWriter to which the databases' key/data pairs are written.
If null, will be used.
Write the key/data pairs from all databases in the file to
. Key values are written for Btree,
Hash and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
If true and characters in either the key or data items are printing
characters (as defined by isprint(3)), use printing characters to
represent them. This setting permits users to use standard text
editors and tools to modify the contents of databases or selectively
remove data from salvager output.
The TextWriter to which the databases' key/data pairs are written.
If null, will be used.
Write the key/data pairs from all databases in the file to
. Key values are written for Btree, Hash
and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
If true and characters in either the key or data items are printing
characters (as defined by isprint(3)), use printing characters to
represent them. This setting permits users to use standard text
editors and tools to modify the contents of databases or selectively
remove data from salvager output.
If true, output all the key/data pairs in the file that can be
found. Corruption will be assumed and key/data pairs that are
corrupted or have been deleted may appear in the output (even if the
file being salvaged is in no way corrupt), and the output will
almost certainly require editing before being loaded into a
database.
Write the key/data pairs from all databases in the file to
. Key values are written for Btree,
Hash and Queue databases, but not for Recno databases.
The physical file in which the databases to be salvaged are found.
Configuration parameters for the databases to be salvaged.
If true and characters in either the key or data items are printing
characters (as defined by isprint(3)), use printing characters to
represent them. This setting permits users to use standard text
editors and tools to modify the contents of databases or selectively
remove data from salvager output.
If true, output all the key/data pairs in the file that can be
found. Corruption will be assumed and key/data pairs that are
corrupted or have been deleted may appear in the output (even if the
file being salvaged is in no way corrupt), and the output will
almost certainly require editing before being loaded into a
database.
The TextWriter to which the databases' key/data pairs are written.
If null, will be used.
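A hedged sketch of the fullest Salvage overload described above, writing printable key/data pairs to a file. The parameter order follows the descriptions in this section, but the exact static signature is an assumption.

```csharp
using System.IO;
using BerkeleyDB;

class SalvageExample {
    static void Main() {
        using (var output = new StreamWriter("salvage.txt")) {
            // file, config, printable, destination TextWriter
            // (a null writer would mean the default output).
            Database.Salvage("example.db", new DatabaseConfig(),
                true,     // represent printable characters as text
                output);
        }
    }
}
```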
Upgrade all of the databases included in the file
, if necessary. If no upgrade is necessary,
Upgrade always returns successfully.
The physical file containing the databases to be upgraded.
Configuration parameters for the databases to be upgraded.
Upgrade all of the databases included in the file
, if necessary. If no upgrade is necessary,
Upgrade always returns successfully.
Database upgrades are done in place and are destructive. For
example, if pages need to be allocated and no disk space is
available, the database may be left corrupted. Backups should be
made before databases are upgraded. See Upgrading databases in the
Programmer's Reference Guide for more information.
As part of the upgrade from the Berkeley DB 3.0 release to the 3.1
release, the on-disk format of duplicate data items changed. To
correctly upgrade the format requires applications to specify
whether duplicate data items in the database are sorted or not.
Specifying informs Upgrade that
the duplicates are sorted; otherwise they are assumed to be
unsorted. Incorrectly specifying the value of this flag may lead to
database corruption.
Further, because this method upgrades a physical file (including all
the databases it contains), it is not possible to use Upgrade to
upgrade files in which some of the databases it includes have sorted
duplicate data items, and some of the databases it includes have
unsorted duplicate data items. If the file does not have more than a
single database, if the databases do not support duplicate data
items, or if all of the databases that support duplicate data items
support the same style of duplicates (either sorted or unsorted),
Upgrade will work correctly as long as
is correctly specified.
Otherwise, the file cannot be upgraded using Upgrade; it must be
upgraded manually by dumping and reloading the databases.
The physical file containing the databases to be upgraded.
Configuration parameters for the databases to be upgraded.
If true, the duplicates in the upgraded database are sorted;
otherwise they are assumed to be unsorted. This setting is only
meaningful when upgrading databases from releases before the
Berkeley DB 3.1 release.
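Because upgrades are destructive and done in place, a backup should precede the call, as in this sketch (the overload taking the sorted-duplicates flag is assumed to match the description above):

```csharp
using BerkeleyDB;

class UpgradeExample {
    static void Main() {
        // Upgrades are in place and destructive: back up the file first.
        System.IO.File.Copy("example.db", "example.db.bak", true);

        // The boolean states that duplicate items in the pre-3.1 file are
        // sorted; specifying it incorrectly can corrupt the database.
        Database.Upgrade("example.db", new DatabaseConfig(), true);
    }
}
```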
Verify the integrity of all databases in the file specified by
.
Verify does not perform any locking, even in Berkeley DB
environments that are configured with a locking subsystem. As such,
it should only be used on files that are not being modified by
another thread of control.
The physical file in which the databases to be verified are found.
Configuration parameters for the databases to be verified.
Verify the integrity of all databases in the file specified by
.
Berkeley DB normally verifies that btree keys and duplicate items
are correctly sorted, and hash keys are correctly hashed. If the
file being verified contains multiple databases using differing
sorting or hashing algorithms, some of them must necessarily fail
database verification because only one sort order or hash function
can be specified in . To verify files with
multiple databases having differing sorting orders or hashing
functions, first perform verification of the file as a whole by
using , and then
individually verify the sort order and hashing function for each
database in the file using
.
The physical file in which the databases to be verified are found.
Configuration parameters for the databases to be verified.
The extent of verification
Verify the integrity of the database specified by
and .
Berkeley DB normally verifies that btree keys and duplicate items
are correctly sorted, and hash keys are correctly hashed. If the
file being verified contains multiple databases using differing
sorting or hashing algorithms, some of them must necessarily fail
database verification because only one sort order or hash function
can be specified in . To verify files with
multiple databases having differing sorting orders or hashing
functions, first perform verification of the file as a whole by
using , and then
individually verify the sort order and hashing function for each
database in the file using
.
The physical file in which the databases to be verified are found.
The database in on which the database checks
for btree and duplicate sort order and for hashing are to be
performed. A non-null value for database is only allowed with
.
Configuration parameters for the databases to be verified.
The extent of verification
Specifies the type of verification to perform
Perform database checks and check sort order
Perform the database checks for btree and duplicate sort order
and for hashing
Skip the database checks for btree and duplicate sort order and
for hashing.
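The two-pass verification procedure described above might look like the following sketch; the enumeration member names mirror the Berkeley DB DB_NOORDERCHK and DB_ORDERCHKONLY flags but are assumptions for this binding.

```csharp
using BerkeleyDB;

class VerifyExample {
    static void Main() {
        // First pass: verify the physical file as a whole, skipping the
        // per-database sort-order and hashing checks.
        Database.Verify("example.db", new DatabaseConfig(),
            Database.VerifyOperation.NO_ORDER_CHECK);

        // Second pass: verify only one contained database's sort order
        // and hashing (database name is illustrative).
        Database.Verify("example.db", "mydb", new DatabaseConfig(),
            Database.VerifyOperation.ORDER_CHECK_ONLY);
    }
}
```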
Instantiate a new HashDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
to be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new HashDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
to be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new HashDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
to be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new HashDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
to be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
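A sketch of the fullest Open overload above: a transactional environment, a named database within a single file, and the open protected by a transaction that is committed before the handle is closed. Property and method names follow this library's conventions but are assumptions.

```csharp
using BerkeleyDB;

class OpenExample {
    static void Main() {
        var envCfg = new DatabaseEnvironmentConfig {
            Create = true, UseMPool = true, UseLogging = true,
            UseLocking = true, UseTxns = true
        };
        DatabaseEnvironment env = DatabaseEnvironment.Open("envhome", envCfg);

        Transaction txn = env.BeginTransaction();
        var cfg = new HashDatabaseConfig {
            Creation = CreatePolicy.IF_NEEDED, Env = env
        };
        // file name + database name: multiple databases per physical file.
        HashDatabase db = HashDatabase.Open("example.db", "mydb", cfg, txn);
        txn.Commit();  // resolve the open's transaction before closing

        db.Close();
        env.Close();
    }
}
```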
Create a database cursor.
A newly created cursor
Create a database cursor with the given configuration.
The configuration properties for the cursor.
A newly created cursor
Create a transactionally protected database cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Create a transactionally protected database cursor with the given
configuration.
The configuration properties for the cursor.
The transaction context in which the cursor may be used.
A newly created cursor
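As a hedged example of cursor use, the sketch below iterates over every key/data pair and closes the cursor before the database, as recommended earlier in this section. The MoveNext/Current enumeration style is an assumption about this binding.

```csharp
using System;
using System.Text;
using BerkeleyDB;

class CursorExample {
    static void Main() {
        var cfg = new HashDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        HashDatabase db = HashDatabase.Open("example.db", cfg);

        var cursor = db.Cursor();
        while (cursor.MoveNext()) {
            string k = Encoding.UTF8.GetString(cursor.Current.Key.Data);
            string v = Encoding.UTF8.GetString(cursor.Current.Value.Data);
            Console.WriteLine("{0} => {1}", k, v);
        }
        // Close cursors explicitly before closing the database so their
        // memory is reclaimed.
        cursor.Close();
        db.Close();
    }
}
```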
Return the database statistical information which does not require
traversal of the database.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
The statistical information is described by the
, ,
, and classes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
The database statistical information which does not require
traversal of the database.
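The difference between the fast and full statistics calls can be sketched as follows; the HashStats type name matches this section's description, but the individual statistic properties are not shown because their names are not documented here.

```csharp
using BerkeleyDB;

class StatsExample {
    static void Main() {
        var cfg = new HashDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
        HashDatabase db = HashDatabase.Open("example.db", cfg);

        // FastStats returns only the counters that do not require a
        // traversal; Stats walks the entire database.
        HashStats fast = db.FastStats();
        HashStats full = db.Stats();

        db.Close();
    }
}
```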
Return pages to the filesystem that are already free and at the end
of the file.
The number of database pages returned to the filesystem
Return pages to the filesystem that are already free and at the end
of the file.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The number of database pages returned to the filesystem
Store the key/data pair in the database only if it does not already
appear in the database.
The key to store in the database
The data item to store in the database
Store the key/data pair in the database only if it does not already
appear in the database.
The key to store in the database
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Return the database statistical information for this database.
Database statistical information.
Return the database statistical information for this database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Database statistical information.
Return the database statistical information for this database.
The statistical information is described by
.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
Database statistical information.
The Hash key comparison function. The comparison function is called
whenever it is necessary to compare a key specified by the
application with a key currently stored in the database.
The duplicate data item comparison function.
Whether the insertion of duplicate data items in the database is
permitted, and whether duplicate items are sorted.
The desired density within the hash table.
A user-defined hash function; if no hash function is specified, a
default hash function is used.
An estimate of the final size of the hash table.
A class representing a Berkeley DB database environment - a collection
including support for some or all of caching, locking, logging and
transaction subsystems, as well as databases and log files.
Set the clock skew ratio among replication group members based on
the fastest and slowest measurements among the group for use with
master leases.
Calling this method is optional; the default values for clock skew
assume no skew. The user must also configure leases via
. Additionally, the user must also
set the master lease timeout via and
the number of sites in the replication group via
. These settings may be configured in any
order. For a description of the clock skew values, see Clock skew
in the Berkeley DB Programmer's Reference Guide. For a description
of master leases, see Master leases in the Berkeley DB Programmer's
Reference Guide.
These arguments can be used to express either raw measurements of a
clock timing experiment or a percentage across machines. For
instance, if a group of sites has a 2% variance, then
should be set to 102, and
should be set to 100. Or, for a 0.03%
difference, you can use 10003 and 10000, respectively.
The value, relative to , of the fastest clock
in the group of sites.
The value of the slowest clock in the group of sites.
Set a threshold for the minimum and maximum time that a client waits
before requesting retransmission of a missing message.
If the client detects a gap in the sequence of incoming log records
or database pages, Berkeley DB will wait for at least
microseconds before requesting retransmission
of the missing record. Berkeley DB will double that amount before
requesting the same missing record again, and so on, up to a
maximum threshold of microseconds.
These values are thresholds only. Since Berkeley DB has no thread
available in the library as a timer, the threshold is only checked
when a thread enters the Berkeley DB library to process an incoming
replication message. Any amount of time may have passed since the
last message arrived and Berkeley DB only checks whether the amount
of time since a request was made is beyond the threshold value or
not.
By default the minimum is 40000 and the maximum is 1280000 (1.28
seconds). These defaults are fairly arbitrary and the application
likely needs to adjust them. The values should be based on expected
load and performance characteristics of the master and client host
platforms and transport infrastructure as well as round-trip message
time.
The minimum number of microseconds a client waits before requesting
retransmission.
The maximum number of microseconds a client waits before requesting
retransmission.
Set a byte-count limit on the amount of data that will be
transmitted from a site in response to a single message processed by
. The limit is not a hard limit, and
the record that exceeds the limit is the last record to be sent.
Record transmission throttling is turned on by default with a limit
of 10MB.
If both and are
zero, then the transmission limit is turned off.
The number of gigabytes which, when added to
, specifies the maximum number of bytes that
will be sent in a single call to .
The number of bytes which, when added to
, specifies the maximum number of bytes
that will be sent in a single call to
.
Initialize the communication infrastructure for a database
environment participating in a replicated application.
RepSetTransport is not called by most replication applications. It
should only be called by applications implementing their own network
transport layer, explicitly holding replication group elections and
handling replication messages outside of the replication manager
framework.
The local environment's ID. It must be a non-negative integer and
uniquely identify this Berkeley DB database environment (see
Replication environment IDs in the Programmer's Reference Guide for
more information).
The delegate used to transmit data using the replication
application's communication infrastructure.
Instantiate a new DatabaseEnvironment object and open the Berkeley
DB environment represented by .
The database environment's home directory. For more information on
home, and filename resolution in general, see Berkeley DB File
Naming in the Programmer's Reference Guide.
The environment's configuration
A new, open DatabaseEnvironment object
Destroy a Berkeley DB environment if it is not currently in use.
The environment regions, including any backing files, are removed.
Any log or database files and the environment directory are not
removed.
If there are processes that have called without
calling (that is, there are processes currently
using the environment), Remove will fail without further action.
Calling Remove should not be necessary for most applications because
the Berkeley DB environment is cleaned up as part of normal database
recovery procedures. However, applications may want to call Remove
as part of application shut down to free up system resources. For
example, if was
specified to , it may be useful to call Remove in
order to release system shared memory segments that have been
allocated. Or, on architectures in which mutexes require allocation
of underlying system resources, it may be useful to call Remove in
order to release those resources. Alternatively, if recovery is not
required because no database state is maintained across failures,
and no system resources need to be released, it is possible to clean
up an environment by simply removing all the Berkeley DB files in
the database environment's directories.
In multithreaded applications, only a single thread may call Remove.
The database environment to be removed.
Destroy a Berkeley DB environment if it is not currently in use.
Generally, is specified only when
applications were unable to shut down cleanly, and there is a risk
that an application may have died holding a Berkeley DB lock.
The result of attempting to forcibly destroy the environment when it
is in use is unspecified. Processes using an environment often
maintain open file descriptors for shared regions within it. On UNIX
systems, the environment removal will usually succeed, and processes
that have already joined the region will continue to run in that
region without change. However, processes attempting to join the
environment will either fail or create new regions. On other systems
in which the unlink(2) system call will fail if any process has an
open file descriptor for the file (for example Windows/NT), the
region removal will fail.
The database environment to be removed.
If true, the environment is removed, regardless of any processes
that may still be using it, and no locks are acquired during this
process.
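A minimal sketch of environment removal as described above; the forcible overload is shown commented out because its results are unspecified while processes are attached, and the boolean-overload signature is an assumption.

```csharp
using BerkeleyDB;

class RemoveExample {
    static void Main() {
        // Remove the environment regions and their backing files once no
        // process is using the environment; database and log files and the
        // environment directory itself are left intact.
        DatabaseEnvironment.Remove("envhome");

        // Forcible variant (assumed overload): removes the environment
        // regardless of attached processes, acquiring no locks.
        // DatabaseEnvironment.Remove("envhome", true);
    }
}
```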
Hold an election for the master of a replication group.
Hold an election for the master of a replication group.
The number of replication sites expected to participate in the
election. Once the current site has election information from that
many sites, it will short-circuit the election and immediately cast
its vote for a new master. This parameter must be no less than
, or 0 if the election should use
. If an application is using master leases,
then the value must be 0 and must be used.
Hold an election for the master of a replication group.
RepHoldElection is not called by most replication applications. It
should only be called by applications implementing their own network
transport layer, explicitly holding replication group elections and
handling replication messages outside of the replication manager
framework.
If the election is successful, Berkeley DB will notify the
application of the results of the election by means of either the
or
events (see
for more information). The application is
responsible for adjusting its relationship to the other database
environments in the replication group, including directing all
database updates to the newly selected master, in accordance with
the results of the election.
The thread of control that calls RepHoldElection must not be the
thread of control that processes incoming messages; processing the
incoming messages is necessary to successfully complete an election.
Before calling this method, the delegate
must already have been configured to send replication messages.
The number of replication sites expected to participate in the
election. Once the current site has election information from that
many sites, it will short-circuit the election and immediately cast
its vote for a new master. This parameter must be no less than
, or 0 if the election should use
. If an application is using master leases,
then the value must be 0 and must be used.
The minimum number of replication sites from which the current site
must have election information, before the current site will cast a
vote for a new master. This parameter must be no greater than
, or 0 if the election should use the value
(( / 2) + 1).
Add a new replication site to the replication manager's list of
known sites. It is not necessary for all sites in a replication
group to know about all other sites in the group.
The remote site's address
The environment ID assigned to the remote site
Add a new replication site to the replication manager's list of
known sites. It is not necessary for all sites in a replication
group to know about all other sites in the group.
Currently, the replication manager framework only supports a single
client peer, and the last specified peer is used.
The remote site's address
If true, configure client-to-client synchronization with the
specified remote site.
The environment ID assigned to the remote site
Start the replication manager as a client site, and do not call for
an election.
There are two ways to build Berkeley DB replication applications:
the most common approach is to use the Berkeley DB library
"replication manager" support, where the Berkeley DB library manages
the replication group, including network transport, all replication
message processing and acknowledgment, and group elections.
Applications using the replication manager support generally make
the following calls:
-
Configure the local site in the replication group,
.
-
Call to configure the remote
site(s) in the replication group.
- Configure the message acknowledgment policy
() which provides the replication group's
transactional needs.
-
Configure the local site's election priority,
.
-
Call or
to start the replication
application.
For more information on building replication manager applications,
please see the Replication Getting Started Guide included in the
Berkeley DB documentation.
Applications with special needs (for example, applications using
network protocols not supported by the Berkeley DB replication
manager), must perform additional configuration and call other
Berkeley DB replication methods. For more information on building
advanced replication applications, please see the Base Replication
API section in the Berkeley DB Programmer's Reference Guide for more
information.
Starting the replication manager consists of opening the TCP/IP
listening socket to accept incoming connections, and starting all
necessary background threads. When multiple processes share a
database environment, only one process can open the listening
socket; (and
) automatically open the socket in
the first process to call it, and skip this step in later calls
from other processes.
Specify the number of threads of control created and dedicated to
processing replication messages. In addition to these message
processing threads, the replication manager creates and manages a
few of its own threads of control.
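The replication manager call sequence listed above can be sketched as follows. The configuration member names (RepSystemCfg, RepMgrLocalSite, Priority), the addresses, and the ports are assumptions for illustration; the acknowledgment-policy step is omitted because its member name is not given in this section.

```csharp
using BerkeleyDB;

class RepMgrExample {
    static void Main() {
        var cfg = new DatabaseEnvironmentConfig {
            Create = true, UseMPool = true, UseLogging = true,
            UseLocking = true, UseTxns = true, UseReplication = true,
            RepSystemCfg = new ReplicationConfig()
        };
        // Configure the local site in the replication group (address and
        // port are illustrative).
        cfg.RepSystemCfg.RepMgrLocalSite =
            new ReplicationHostAddress("localhost", 6000);
        // Configure the local site's election priority.
        cfg.RepSystemCfg.Priority = 100;

        DatabaseEnvironment env = DatabaseEnvironment.Open("envhome", cfg);

        // Make a remote group member known to the replication manager.
        env.RepMgrAddRemoteSite(
            new ReplicationHostAddress("localhost", 6001));

        // Start as a client with three message-processing threads, calling
        // for an election if no master is found.
        env.RepMgrStartClient(3, true);
    }
}
```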
Start the replication manager as a client site, and optionally call
for an election.
Specify the number of threads of control created and dedicated to
processing replication messages. In addition to these message
processing threads, the replication manager creates and manages a
few of its own threads of control.
If true, start as a client, and call for an election if no master is
found.
Start the replication manager as a master site, and do not call for
an election.
There are two ways to build Berkeley DB replication applications:
the most common approach is to use the Berkeley DB library
"replication manager" support, where the Berkeley DB library manages
the replication group, including network transport, all replication
message processing and acknowledgment, and group elections.
Applications using the replication manager support generally make
the following calls:
-
Configure the local site in the replication group,
.
-
Call to configure the remote
site(s) in the replication group.
- Configure the message acknowledgment policy
() which provides the replication group's
transactional needs.
-
Configure the local site's election priority,
.
-
Call or
to start the replication
application.
For more information on building replication manager applications,
please see the Replication Getting Started Guide included in the
Berkeley DB documentation.
Applications with special needs (for example, applications using
network protocols not supported by the Berkeley DB replication
manager), must perform additional configuration and call other
Berkeley DB replication methods. For more information on building
advanced replication applications, please see the Base Replication
API section in the Berkeley DB Programmer's Reference Guide for more
information.
Starting the replication manager consists of opening the TCP/IP
listening socket to accept incoming connections, and starting all
necessary background threads. When multiple processes share a
database environment, only one process can open the listening
socket; (and
) automatically open the socket in
the first process to call it, and skip this step in later calls
from other processes.
Specify the number of threads of control created and dedicated to
processing replication messages. In addition to these message
processing threads, the replication manager creates and manages a
few of its own threads of control.
Process an incoming replication message sent by a member of the
replication group to the local database environment.
RepProcessMessage is not called by most replication applications. It
should only be called by applications implementing their own network
transport layer, explicitly holding replication group elections and
handling replication messages outside of the replication manager
framework.
For implementation reasons, all incoming replication messages must
be processed using the same
object. It is not required that a single thread of control process
all messages, only that all threads of control processing messages
use the same object.
Before calling this method, the delegate
must already have been configured to send replication messages.
A copy of the control parameter specified by Berkeley DB on the
sending environment.
A copy of the rec parameter specified by Berkeley DB on the sending
environment.
The local identifier that corresponds to the environment that sent
the message to be processed (see Replication environment IDs in the
Programmer's Reference Guide for more information).
The result of processing a message
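The requirement that all message-processing threads share a single environment object can be sketched as follows. This is a toy Python model, not the libdb_dotnet48 API; the class and method names are illustrative:

```python
# Toy model: several worker threads drain a queue of incoming
# replication messages, but all of them call the same shared
# environment object, as RepProcessMessage requires.
import queue
import threading

class SharedEnv:
    def __init__(self):
        self.lock = threading.Lock()
        self.processed = 0

    def rep_process_message(self, control, rec, envid):
        # stand-in for the real message processing
        with self.lock:
            self.processed += 1

def run_workers(env, messages, nthreads=3):
    q = queue.Queue()
    for m in messages:
        q.put(m)

    def worker():
        while True:
            try:
                control, rec, envid = q.get_nowait()
            except queue.Empty:
                return
            env.rep_process_message(control, rec, envid)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

It does not matter which thread handles a given message, only that every thread uses the same object.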
Configure the database environment as a client in a group of
replicated database environments.
Configure the database environment as a client in a group of
replicated database environments.
RepStartClient is not called by most replication applications. It
should only be called by applications implementing their own network
transport layer, explicitly holding replication group elections and
handling replication messages outside of the replication manager
framework.
Replication master environments are the only database environments
where replicated databases may be modified. Replication client
environments are read-only as long as they are clients. Replication
client environments may be upgraded to be replication master
environments in the case that the current master fails or there is
no master present. If master leases are in use, this method cannot
be used to appoint a master, and should only be used to configure a
database environment as a master as the result of an election.
Before calling this method, the delegate
must already have been configured to send replication messages.
An opaque data item that is sent over the communication
infrastructure when the client comes online (see Connecting to a new
site in the Programmer's Reference Guide for more information). If
no such information is useful, cdata should be null.
Configure the database environment as a master in a group of
replicated database environments.
Configure the database environment as a master in a group of
replicated database environments.
RepStartMaster is not called by most replication applications. It
should only be called by applications implementing their own network
transport layer, explicitly holding replication group elections and
handling replication messages outside of the replication manager
framework.
Replication master environments are the only database environments
where replicated databases may be modified. Replication client
environments are read-only as long as they are clients. Replication
client environments may be upgraded to be replication master
environments in the case that the current master fails or there is
no master present. If master leases are in use, this method cannot
be used to appoint a master, and should only be used to configure a
database environment as a master as the result of an election.
Before calling this method, the delegate
must already have been configured to send replication messages.
An opaque data item that is sent over the communication
infrastructure when the client comes online (see Connecting to a new
site in the Programmer's Reference Guide for more information). If
no such information is useful, cdata should be null.
Force master synchronization to begin for this client.
This method is the other half of setting
.
If an application has configured delayed master synchronization, the
application must synchronize explicitly (otherwise the client will
remain out-of-date and will ignore all database changes forwarded
from the replication group master). RepSync may be called any time
after the client application learns that the new master has been
established (by receiving
).
Before calling this method, the delegate
must already have been configured to send replication messages.
The names of all of the log files that are no longer in use (for
example, that are no longer involved in active transactions), and
that may safely be archived for catastrophic recovery and then
removed from the system.
The Berkeley DB interfaces to the database environment logging
subsystem (for example, ) may
allocate log cursors and have open file descriptors for log files
as well. On operating systems where filesystem-related system calls
(for example, rename and unlink on Windows/NT) can fail if a process
has an open file descriptor for the affected file, attempting to
move or remove the log files listed by ArchivableLogFiles may fail.
All Berkeley DB internal use of log cursors operates on active log
files only and furthermore, is short-lived in nature. So, an
application seeing such a failure should be restructured to retry
the operation until it succeeds. (Although this is not likely to be
necessary; it is hard to imagine a reason to move or rename a log
file in which transactions are being logged or aborted.)
See the db_archive utility for more information on database archival
procedures.
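The retry-until-success advice above can be sketched as a small helper. This is hypothetical code, not part of the library:

```python
# Hedged sketch: on Windows, renaming or unlinking a log file can
# fail while another process holds an open descriptor for it; the
# docs advise retrying until the operation succeeds. retry_op is a
# hypothetical helper, not a libdb_dotnet48 call.
import time

def retry_op(op, attempts=5, delay=0.01):
    """Call op(); on PermissionError, retry up to `attempts` times."""
    for i in range(attempts):
        try:
            return op()
        except PermissionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)  # give the other process time to close
```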
If true, all pathnames are returned as absolute pathnames, instead
of relative to the database home directory.
The names of all of the log files that are no longer in use
The database files that need to be archived in order to recover the
database from catastrophic failure. If any of the database files
have not been accessed during the lifetime of the current log files,
they will not be included in this list. It is also possible that some
of the files referred to by the log have since been deleted from the
system.
See the db_archive utility for more information on database archival
procedures.
If true, all pathnames are returned as absolute pathnames, instead
of relative to the database home directory.
The database files that need to be archived in order to recover the
database from catastrophic failure.
The names of all of the log files
The Berkeley DB interfaces to the database environment logging
subsystem (for example, ) may
allocate log cursors and have open file descriptors for log files
as well. On operating systems where filesystem-related system calls
(for example, rename and unlink on Windows/NT) can fail if a process
has an open file descriptor for the affected file, attempting to
move or remove the log files listed by LogFiles may fail. All
Berkeley DB internal use of log cursors operates on active log files
only and furthermore, is short-lived in nature. So, an application
seeing such a failure should be restructured to retry the operation
until it succeeds. (Although this is not likely to be necessary; it
is hard to imagine a reason to move or rename a log file in which
transactions are being logged or aborted.)
See the db_archive utility for more information on database archival
procedures.
If true, all pathnames are returned as absolute pathnames, instead
of relative to the database home directory.
All the log filenames, regardless of whether or not they are in use.
Remove log files that are no longer needed. Automatic log file
removal is likely to make catastrophic recovery impossible.
Allocate a locker ID in an environment configured for Berkeley DB
Concurrent Data Store applications.
Calling will discard the allocated
locker ID.
See Berkeley DB Concurrent Data Store applications in the
Programmer's Reference Guide for more information about when this is
required.
A Transaction object that uniquely identifies the locker ID
Create a new transaction in the environment, with the default
configuration.
A new transaction object
Create a new transaction in the environment.
The configuration properties for the transaction
A new transaction object
Create a new transaction in the environment.
In the presence of distributed transactions and two-phase commit,
only the parental transaction, that is a transaction without a
parent specified, should be passed as a parameter to
.
The configuration properties for the transaction
If non-null, the new transaction will be a nested transaction,
with as the new transaction's parent.
Transactions may be nested to any level.
A new transaction object
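Nested-transaction semantics can be modeled roughly as follows. This is a toy sketch; the `Txn` class and its fold-into-parent behavior are illustrative assumptions, not the library's implementation:

```python
# Toy model of nested transactions: a child's changes only become
# durable when the outermost parent commits; aborting a transaction
# discards its changes (including those folded in from committed
# children).
class Txn:
    def __init__(self, parent=None):
        self.parent = parent
        self.changes = []

    def put(self, change):
        self.changes.append(change)

    def commit(self):
        if self.parent is not None:
            # child commit: fold changes into the parent, nothing
            # is durable yet
            self.parent.changes.extend(self.changes)
            return []
        return self.changes  # outermost commit: changes are durable

    def abort(self):
        self.changes = []
        return []
```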
Flush the underlying memory pool, write a checkpoint record to the
log, and then flush the log, even if there has been no activity
since the last checkpoint.
If there has been any logging activity in the database environment
since the last checkpoint, flush the underlying memory pool, write a
checkpoint record to the log, and then flush the log.
A checkpoint will be done if more than kbytesWritten kilobytes of
log data have been written since the last checkpoint.
A checkpoint will be done if more than minutesElapsed minutes have
passed since the last checkpoint.
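The two checkpoint triggers described above can be sketched as a predicate. This is a hypothetical helper, not the library's logic:

```python
# Sketch of the checkpoint trigger: checkpoint if more than
# kbytes_written KB of log data have been written, or more than
# minutes_elapsed minutes have passed, since the last checkpoint.
# A threshold of 0 disables that trigger.
def should_checkpoint(kb_since_last, min_since_last,
                      kbytes_written=0, minutes_elapsed=0):
    if kbytes_written and kb_since_last > kbytes_written:
        return True
    if minutes_elapsed and min_since_last > minutes_elapsed:
        return True
    return False
```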
Close the Berkeley DB environment, freeing any allocated resources
and closing any underlying subsystems.
The object should not be closed while any other handle that refers
to it remains open; for example, database environment handles
must not be closed while database objects remain open, or
transactions in the environment have not yet been committed or
aborted.
Where the environment was configured with
, calling Close
aborts any unresolved transactions. Applications should not depend
on this behavior for transactions involving Berkeley DB databases;
all such transactions should be explicitly resolved. The problem
with depending on this semantic is that aborting an unresolved
transaction involving database operations requires a database
handle. Because the database handles should have been closed before
calling Close, it will not be possible to abort the transaction, and
recovery will have to be run on the Berkeley DB environment before
further operations are done.
In multithreaded applications, only a single thread may call Close.
Run one iteration of the deadlock detector. The deadlock detector
traverses the lock table and marks one of the participating lock
requesters for rejection in each deadlock it finds.
Specify which lock request(s) to reject
The number of lock requests that were rejected.
Check for threads of control (either a true thread or a process)
that have exited while manipulating Berkeley DB library data
structures, while holding a logical database lock, or with an
unresolved transaction (that is, a transaction that was never
aborted or committed).
For more information, see Architecting Data Store and Concurrent
Data Store applications, and Architecting Transactional Data Store
applications, both in the Berkeley DB Programmer's Reference Guide.
FailCheck is based on the and
delegates. Applications calling
FailCheck must have already set , and
must have configured .
If FailCheck determines a thread of control exited while holding
database read locks, it will release those locks. If FailCheck
determines a thread of control exited with an unresolved
transaction, the transaction will be aborted. In either of these
cases, FailCheck will return successfully and the application may
continue to use the database environment.
In either of these cases, FailCheck will also report the process and
thread IDs associated with any released locks or aborted
transactions. The information is printed to a specified output
channel (see for more information), or
passed to an application delegate (see for
more information).
If FailCheck determines a thread of control has exited such that
database environment recovery is required, it will throw
. In this case, the application
should not continue to use the database environment. For a further
description as to the actions the application should take when this
failure occurs, see Handling failure in Data Store and Concurrent
Data Store applications, and Handling failure in Transactional Data
Store applications, both in the Berkeley DB Programmer's Reference
Guide.
Map an LSN object to a log filename
The DB_LSN structure for which a filename is wanted.
The name of the file containing the record named by
.
Write all log records to disk.
Write log records to disk.
All log records with LSN values less than or equal to
are written to disk. If null, all
records in the log are flushed.
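The LSN ordering and the "flush everything at or below this LSN" behavior can be modeled as follows, with an LSN represented as a (file, offset) pair. The names are illustrative, not the library's:

```python
# Model: an LSN orders first by log file number, then by offset
# within the file; flushing up to an LSN writes every buffered
# record at or below that point, and None flushes everything.
def compare_lsn(a, b):
    """0 if equal, 1 if a > b, -1 if a < b."""
    return (a > b) - (a < b)  # tuples compare file first, then offset

def flush_up_to(buffered, max_lsn=None):
    """Return the (lsn, record) pairs that would be written."""
    if max_lsn is None:
        return list(buffered)
    return [(lsn, rec) for lsn, rec in buffered
            if compare_lsn(lsn, max_lsn) <= 0]
```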
Append a record to the log
The record to write to the log.
If true, the log is forced to disk after this record is written,
guaranteeing that all records with LSN values less than or equal to
the one being "put" are on disk before LogWrite returns.
The LSN of the written record
Set the panic state for the database environment. (Database
environments in a panic state normally refuse all attempts to call
Berkeley DB functions, throwing .)
Restore transactions that were prepared, but not yet resolved at the
time of the system shut down or crash, to their state prior to the
shut down or crash, including any locks previously held.
Calls to Recover from different threads of control should not be
intermixed in the same environment.
The maximum number of objects
to return.
If true, continue returning a list of prepared, but not yet resolved
transactions, starting where the last call to Recover left off. If
false, begins a new pass over all prepared, but not yet completed
transactions, regardless of whether they have already been returned
in previous calls to Recover.
A list of the prepared transactions
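The count/resume semantics of Recover can be sketched as follows (hypothetical names, not the real API):

```python
# Sketch: each call returns up to `count` prepared-but-unresolved
# transactions; resume=True continues where the previous call left
# off, resume=False starts a fresh pass over all of them.
class PreparedTxnScanner:
    def __init__(self, prepared):
        self.prepared = prepared
        self.pos = 0

    def recover(self, count, resume):
        if not resume:
            self.pos = 0
        batch = self.prepared[self.pos:self.pos + count]
        self.pos += len(batch)
        return batch
```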
Remove the underlying file represented by ,
incidentally removing all of the databases it contained.
The physical file to be removed.
If true, enclose RemoveDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
Remove the database specified by and
. If no database is specified, the
underlying file represented by is removed,
incidentally removing all of the databases it contained.
The physical file which contains the database(s) to be removed.
The database to be removed.
If true, enclose RemoveDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
Remove the underlying file represented by ,
incidentally removing all of the databases it contained.
The physical file to be removed.
If true, enclose RemoveDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null. If
null, but or
is true, the operation will be implicitly transaction protected.
Remove the database specified by and
. If no database is specified, the
underlying file represented by is removed,
incidentally removing all of the databases it contained.
The physical file which contains the database(s) to be removed.
The database to be removed.
If true, enclose RemoveDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null. If
null, but or
is true, the operation will be implicitly transaction protected.
Rename the underlying file represented by
using the value supplied to , incidentally
renaming all of the databases it contained.
The physical file to be renamed.
The new name of the database or file.
If true, enclose RenameDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
Rename the database specified by and
to . If no
database is specified, the underlying file represented by
is renamed using the value supplied to
, incidentally renaming all of the
databases it contained.
The physical file which contains the database(s) to be renamed.
The database to be renamed.
The new name of the database or file.
If true, enclose RenameDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
Rename the underlying file represented by
using the value supplied to , incidentally
renaming all of the databases it contained.
The physical file to be renamed.
The new name of the database or file.
If true, enclose RenameDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null. If
null, but or
is true, the operation will be implicitly transaction protected.
Rename the database specified by and
to . If no
database is specified, the underlying file represented by
is renamed using the value supplied to
, incidentally renaming all of the
databases it contained.
The physical file which contains the database(s) to be renamed.
The database to be renamed.
The new name of the database or file.
If true, enclose RenameDB within a transaction. If the call
succeeds, changes made by the operation will be recoverable. If the
call fails, the operation will have made no changes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null. If
null, but or
is true, the operation will be implicitly transaction protected.
Allow database files to be copied, and then the copy used in the
same database environment as the original.
All databases contain an ID string used to identify the database in
the database environment cache. If a physical database file is
copied, and used in the same environment as another file with the
same ID strings, corruption can occur. ResetFileID creates new ID
strings for all of the databases in the physical file.
ResetFileID modifies the physical file, in-place. Applications
should not reset IDs in files that are currently in use.
The name of the physical file in which new file IDs are to be created.
If true, the file contains encrypted databases.
Allow database files to be moved from one transactional database
environment to another.
Database pages in transactional database environments contain
references to the environment's log files (that is, log sequence
numbers, or s). Copying or moving a database file
from one database environment to another, and then modifying it, can
result in data corruption if the LSNs are not first cleared.
Note that LSNs should be reset before moving or copying the database
file into a new database environment, rather than moving or copying
the database file and then resetting the LSNs. Berkeley DB has
consistency checks that may be triggered if an application calls
ResetLSN on a database in a new environment when the database LSNs
still reflect the old environment.
The ResetLSN method modifies the physical file, in-place.
Applications should not reset LSNs in files that are currently in
use.
Limit the number of sequential write operations scheduled by the
library when flushing dirty pages from the cache.
The maximum number of sequential write operations scheduled by the
library when flushing dirty pages from the cache, or 0 if there is
no limitation on the number of sequential write operations.
The number of microseconds the thread of control should pause before
scheduling further write operations. It must be specified as an
unsigned 32-bit number of microseconds, limiting the maximum pause
to roughly 71 minutes.
Flush all modified pages in the cache to their backing files.
Pages in the cache that cannot be immediately written back to disk
(for example, pages that are currently in use by another thread of
control) are waited for and written to disk as soon as it is
possible to do so.
Flush modified pages in the cache with log sequence numbers less
than to their backing files.
Pages in the cache that cannot be immediately written back to disk
(for example, pages that are currently in use by another thread of
control) are waited for and written to disk as soon as it is
possible to do so.
All modified pages with a log sequence number less than the minLSN
parameter are written to disk. If null, all modified pages in the
cache are written to disk.
Ensure that a specified percent of the pages in the cache are clean,
by writing dirty pages to their backing files.
The percent of the pages in the cache that should be clean.
The number of pages written to reach the specified percentage is
returned.
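The trickle behavior can be modeled as a toy sketch (the real implementation differs; names are illustrative):

```python
# Model: write dirty cache pages to their backing files until the
# requested percentage of the cache is clean, and report how many
# pages were written.
def trickle(pages, clean_percent):
    """pages: list of booleans, True meaning dirty. Mutates in place."""
    target_clean = len(pages) * clean_percent / 100.0
    written = 0
    for i, dirty in enumerate(pages):
        if len(pages) - sum(pages) >= target_clean:
            break  # enough pages are already clean
        if dirty:
            pages[i] = False  # "write" the page to its backing file
            written += 1
    return written
```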
Append an informational message to the Berkeley DB database
environment log files.
WriteToLog allows applications to include information in the
database environment log files, for later review using the
db_printlog utility. This method is intended for debugging and
performance tuning.
The message to append to the log files
Append an informational message to the Berkeley DB database
environment log files.
WriteToLog allows applications to include information in the
database environment log files, for later review using the
db_printlog utility. This method is intended for debugging and
performance tuning.
The message to append to the log files
If the operation is part of an application-specified transaction,
is a Transaction object returned from
;
otherwise null.
The locking subsystem statistics
The locking subsystem statistics
The locking subsystem statistics
If true, reset statistics after returning their values.
The locking subsystem statistics
The logging subsystem statistics
The logging subsystem statistics
The logging subsystem statistics
If true, reset statistics after returning their values.
The logging subsystem statistics
The memory pool (that is, the buffer cache) subsystem statistics
The memory pool subsystem statistics
The memory pool (that is, the buffer cache) subsystem statistics
If true, reset statistics after returning their values.
The memory pool subsystem statistics
The mutex subsystem statistics
The mutex subsystem statistics
The mutex subsystem statistics
If true, reset statistics after returning their values.
The mutex subsystem statistics
The replication manager statistics
The replication manager statistics
The replication manager statistics
If true, reset statistics after returning their values.
The replication manager statistics
The replication subsystem statistics
The replication subsystem statistics
The replication subsystem statistics
If true, reset statistics after returning their values.
The replication subsystem statistics
The transaction subsystem statistics
The transaction subsystem statistics
The transaction subsystem statistics
If true, reset statistics after returning their values.
The transaction subsystem statistics
Display the locking subsystem statistical information, as described
by .
Display the locking subsystem statistical information, as described
by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the locking subsystem statistical information, as described
by .
If true, display all available information.
If true, reset statistics after displaying their values.
If true, display the lock conflict matrix.
If true, display the lockers within hash chains.
If true, display the lock objects within hash chains.
If true, display the locking subsystem parameters.
Display the logging subsystem statistical information, as described
by .
Display the logging subsystem statistical information, as described
by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the memory pool (that is, buffer cache) subsystem
statistical information, as described by .
Display the memory pool (that is, buffer cache) subsystem
statistical information, as described by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the memory pool (that is, buffer cache) subsystem
statistical information, as described by .
If true, display all available information.
If true, reset statistics after displaying their values.
If true, display the buffers with hash chains.
Display the mutex subsystem statistical information, as described
by .
Display the mutex subsystem statistical information, as described
by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the replication manager statistical information, as
described by .
Display the replication manager statistical information, as
described by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the replication subsystem statistical information, as
described by .
Display the replication subsystem statistical information, as
described by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the database environment statistical information, as
described by .
Display the database environment statistical information, as
described by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the database environment statistical information, as
described by .
Display the database environment statistical information, as
described by .
If true, display all available information.
If true, reset statistics after displaying their values.
Display the database environment statistical information, as
described by .
Display the transaction subsystem statistical information, as
described by .
Display the transaction subsystem statistical information, as
described by .
If true, display all available information.
If true, reset statistics after displaying their values.
If true, database operations for which no explicit transaction
handle was specified, and which modify databases in the database
environment, will be automatically enclosed within a transaction.
The size of the shared memory buffer pool, that is, the cache.
The cache should be the size of the normal working data set of the
application, with some small amount of additional memory for unusual
situations. (Note: the working set is not the same as the number of
pages accessed simultaneously, and is usually much larger.)
The default cache size is 256KB, and may not be specified as less
than 20KB. Any cache size less than 500MB is automatically increased
by 25% to account for buffer pool overhead; cache sizes larger than
500MB are used as specified. The maximum size of a single cache is
4GB on 32-bit systems and 10TB on 64-bit systems. (All sizes are in
powers-of-two, that is, 256KB is 2^18 not 256,000.) For information
on tuning the Berkeley DB cache size, see Selecting a cache size in
the Programmer's Reference Guide.
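The sizing rules above can be expressed as a small calculation. This is a hypothetical helper; the constants (20KB minimum, 256KB default, 25% overhead below 500MB) are taken from the text:

```python
# Sizing rules from the docs: cache sizes below 500MB get 25% added
# for buffer-pool overhead; the minimum is 20KB and the default is
# 256KB. Sizes are powers of two (1KB = 2**10).
KB, MB = 2**10, 2**20

def effective_cache_size(requested=256 * KB):
    if requested < 20 * KB:
        raise ValueError("cache may not be smaller than 20KB")
    if requested < 500 * MB:
        return requested + requested // 4  # +25% overhead
    return requested                       # used as specified
```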
If true, Berkeley DB Concurrent Data Store applications will perform
locking on an environment-wide basis rather than on a per-database
basis.
If true, Berkeley DB subsystems will create any underlying files, as
necessary.
The array of directories where database files are stored.
The deadlock detector configuration, specifying what lock request(s)
should be rejected. As transactions acquire locks on behalf of a
single locker ID, rejecting a lock request associated with a
transaction normally requires the transaction be aborted.
The algorithm used by the Berkeley DB library to perform encryption
and decryption.
The mechanism for reporting detailed error messages to the
application.
When an error occurs in the Berkeley DB library, a
, or subclass of DatabaseException,
is thrown. In some cases, however, the exception may be insufficient
to completely describe the cause of the error, especially during
initial application debugging.
In some cases, when an error occurs, Berkeley DB will call the given
delegate with additional error information. It is up to the delegate
to display the error message in an appropriate manner.
Setting ErrorFeedback to null unconfigures the callback interface.
This error-logging enhancement does not slow performance or
significantly increase application size, and may be run during
normal operation as well as during application debugging.
The prefix string that appears before error messages issued by
Berkeley DB.
For databases opened inside of a DatabaseEnvironment, setting
ErrorPrefix affects the entire environment and is equivalent to
setting .
Setting ErrorPrefix configures operations performed using the
specified object, not all operations performed on the underlying
database.
A delegate which is called to notify the process of specific
Berkeley DB events.
Monitor progress within long running operations.
Some operations performed by the Berkeley DB library can take
non-trivial amounts of time. The Feedback delegate can be used by
applications to monitor progress within these operations. When an
operation is likely to take a long time, Berkeley DB will call the
specified delegate with progress information.
It is up to the delegate to display this information in an
appropriate manner.
If true, flush database writes to the backing disk before returning
from the write system call, rather than flushing database writes
explicitly in a separate system call, as necessary.
This flag may result in inaccurate file modification times and other
file-level information for Berkeley DB database files. This flag
will almost certainly result in a performance decrease on most
systems.
If true, the object is free-threaded; that is, concurrently usable
by multiple threads in the address space.
The database environment home directory.
If true, Berkeley DB will page-fault shared regions into memory when
initially creating or joining a Berkeley DB environment.
In some applications, the expense of page-faulting the underlying
shared memory regions can affect performance. (For example, if the
page-fault occurs while holding a lock, other lock requests can
convoy, and overall throughput may decrease.)
In addition to page-faulting, Berkeley DB will write the shared
regions when creating an environment, forcing the underlying virtual
memory and filesystems to instantiate both the necessary memory and
the necessary disk space. This can also avoid out-of-disk space
failures later on.
The intermediate directory permissions.
The current lock conflicts array.
If true, lock shared Berkeley DB environment files and memory-mapped
databases into memory.
The number of lock table partitions used in the Berkeley DB
environment.
A value, in microseconds, representing lock timeouts.
All timeouts are checked whenever a thread of control blocks on a
lock or when deadlock detection is performed. As timeouts are only
checked when the lock request first blocks or when deadlock
detection is performed, the accuracy of the timeout depends on how
often deadlock detection is performed.
Timeout values specified for the database environment may be
overridden on a per-transaction basis, see
.
The size of the in-memory log buffer, in bytes
The path of a directory to be used as the location of logging files.
Log files created by the Log Manager subsystem will be created in
this directory.
The absolute file mode for created log files. This property is only
useful for the rare Berkeley DB application that does not control
its umask value.
Normally, if Berkeley DB applications set their umask appropriately,
all processes in the application suite will have read permission on
the log files created by any process in the application suite.
However, if the Berkeley DB application is a library, a process
using the library might set its umask to a value preventing other
processes in the application suite from reading the log files it
creates. In this rare case, this property can be used to set the
mode of created log files to an absolute value.
If true, system buffering is turned off for Berkeley DB log files to
avoid double caching.
If true, Berkeley DB will flush log writes to the backing disk
before returning from the write system call, rather than flushing
log writes explicitly in a separate system call, as necessary.
If true, Berkeley DB will automatically remove log files that are no
longer needed.
If true, transaction logs are maintained in memory rather than on
disk. This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability).
If true, all pages of a log file are zeroed when that log file is
created.
The size of the underlying logging area of the Berkeley DB
environment, in bytes.
The maximum cache size
The maximum size of a single file in the log, in bytes. Because
LSN Offsets are unsigned four-byte
values, the size may not be larger than the maximum unsigned
four-byte value.
When the logging subsystem is configured for on-disk logging, the
default size of a log file is 10MB.
When the logging subsystem is configured for in-memory logging, the
default size of a log file is 256KB. In addition, the configured log
buffer size must be larger than the log file size. (The logging
subsystem divides memory configured for in-memory log records into
"files", as database environments configured for in-memory log
records may exchange log records with other members of a replication
group, and those members may be configured to store log records
on-disk.) When choosing log buffer and file sizes for in-memory
logs, applications should ensure the in-memory log buffer size is
large enough that no transaction will ever span the entire buffer,
and avoid a state where the in-memory buffer is full and no space
can be freed because a transaction that started in the first log
"file" is still active.
See Log File Limits in the Programmer's Reference Guide for more
information.
If no size is specified by the application, the size last specified
for the database region will be used, or if no database region
previously existed, the default will be used.
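As a rough illustration (this is a sketch, not part of the Berkeley DB API), the two constraints described above can be expressed directly: a log file may not exceed the maximum unsigned four-byte value because LSN offsets are 32-bit, and for in-memory logging the configured log buffer must be larger than the log file size.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: validates the log-size constraints described
 * above. Not the library's actual implementation. */

#define MAX_LOG_FILE_SIZE UINT32_MAX /* LSN offsets are unsigned 4-byte values */

/* Returns 1 if the configuration is acceptable, 0 otherwise. */
int log_config_ok(uint64_t log_file_size, uint64_t log_buffer_size, int in_memory)
{
    if (log_file_size == 0 || log_file_size > MAX_LOG_FILE_SIZE)
        return 0;
    if (in_memory && log_buffer_size <= log_file_size)
        return 0; /* buffer must be strictly larger than the in-memory "file" */
    return 1;
}
```

For example, the on-disk default of a 10MB log file passes, while an in-memory configuration whose buffer is no larger than its log file does not.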
The maximum number of locking entities supported by the Berkeley DB
environment.
The maximum number of locks supported by the Berkeley DB
environment.
The total number of mutexes allocated
The maximum number of locked objects
The number of file descriptors the library will open concurrently
when flushing dirty pages from the cache.
The number of sequential write operations scheduled by the library
when flushing dirty pages from the cache.
The number of active transactions supported by the environment. This
value bounds the size of the memory allocated for transactions.
Child transactions are counted as active until they either commit or
abort.
Transactions that update multiversion databases are not freed until
the last page version that the transaction created is flushed from
cache. This means that applications using multi-version concurrency
control may need a transaction for each page in cache, in the
extreme case.
When all of the memory available in the database environment for
transactions is in use, calls to will
fail (until some active transactions complete). If MaxTransactions
is never set, the database environment is configured to support at
least 100 active transactions.
The maximum file size, in bytes, for a file to be mapped into the
process address space. If no value is specified, it defaults to
10MB.
Files that are opened read-only in the cache (and that satisfy a few
other criteria) are, by default, mapped into the process address
space instead of being copied into the local cache. This can result
in better-than-usual performance because available virtual memory is
normally much larger than the local cache, and page faults are
faster than page copying on many systems. However, it can cause
resource starvation in the presence of limited virtual memory, and
it can result in immense process sizes in the presence of large
databases.
The mutex alignment, in bytes.
The number of additional mutexes allocated.
If true, turn off system buffering of Berkeley DB database files to
avoid double caching.
If true, Berkeley DB will grant all requested mutual exclusion
mutexes and database locks without regard for their actual
availability. This functionality should never be used for purposes
other than debugging.
If true, Berkeley DB will copy read-only database files into the
local cache instead of potentially mapping them into process memory.
If true, Berkeley DB will ignore any panic state in the database
environment. (Database environments in a panic state normally refuse
all attempts to call Berkeley DB functions, throwing
.) This functionality should never
be used for purposes other than debugging.
The number of times that test-and-set mutexes should spin without
blocking. The value defaults to 1 on uniprocessor systems and to 50
times the number of processors on multiprocessor systems.
If true, overwrite files stored in encrypted formats before deleting
them.
Berkeley DB overwrites files using alternating 0xff, 0x00 and 0xff
byte patterns. For file overwriting to be effective, the underlying
file must be stored on a fixed-block filesystem. Systems with
journaling or logging filesystems will require operating system
support and probably modification of the Berkeley DB sources.
If true, allocate region memory from the heap instead of from memory
backed by the filesystem or system shared memory.
If true, Berkeley DB checked whether recovery needed to be performed
before opening the database environment.
The amount of time the replication manager's transport function
waits to collect enough acknowledgments from replication group
clients, before giving up and returning a failure indication. The
default wait time is 1 second.
If true, the replication master sends groups of records to the
clients in a single network transfer.
The amount of time a master site will delay between completing a
checkpoint and writing a checkpoint record into the log.
This delay allows clients to complete their own checkpoints before
the master requires completion of them. The default is 30 seconds.
If all databases in the environment, and the environment's
transaction log, are configured to reside in memory (never preserved
to disk), then, although checkpoints are still necessary, the delay
is not useful and should be set to 0.
The value, relative to , of the
fastest clock in the group of sites.
The value of the slowest clock in the group of sites.
The amount of time the replication manager will wait before trying
to re-establish a connection to another site after a communication
failure. The default wait time is 30 seconds.
If true, the client should delay synchronizing to a newly declared
master (defaults to false). Clients configured in this way will remain
unsynchronized until the application calls .
Configure the amount of time the replication manager will wait
before retrying a failed election. The default wait time is 10
seconds.
The timeout period for an election. The default timeout is 2
seconds.
An optional configuration timeout period to wait for full election
participation the first time the replication group finds a master.
By default this option is turned off and normal election timeouts
are used. (See the Elections section in the Berkeley DB Reference
Guide for more information.)
The amount of time the replication manager, running at a client
site, waits for some message activity on the connection from the
master (heartbeats or other messages) before concluding that the
connection has been lost. When 0 (the default), no monitoring is
performed.
The frequency at which the replication manager, running at a master
site, broadcasts a heartbeat message in an otherwise idle system.
When 0 (the default), no heartbeat messages will be sent.
The amount of time a client grants its master lease to a master.
When using master leases all sites in a replication group must use
the same lease timeout value. There is no default value. If leases
are desired, this method must be called prior to calling
or .
Specify how master and client sites will handle acknowledgment of
replication messages which are necessary for "permanent" records.
The current implementation requires that all sites in a replication
group configure the same acknowledgment policy.
The host information for the local system.
The status of the sites currently known by the replication manager.
If true, the replication master will not automatically re-initialize
outdated clients (defaults to false).
If true, Berkeley DB method calls that would normally block while
clients are in recovery will return errors immediately (defaults to
false).
The total number of sites in the replication group.
This setting is typically used by applications which use the
Berkeley DB library "replication manager" support. (However, see
also , the description of the nsites
parameter.)
The database environment's priority in replication group elections.
A special value of 0 indicates that this environment cannot be a
replication group master. If not configured, then a default value
of 100 is used.
The minimum number of microseconds a client waits before requesting
retransmission.
The maximum number of microseconds a client waits before requesting
retransmission.
If true, Replication Manager observes the strict "majority" rule in
managing elections, even in a group with only 2 sites. This means
the client in a 2-site group will be unable to take over as master
if the
original master fails or becomes disconnected. (See the Elections
section in the Berkeley DB Reference Guide for more information.)
Both sites in the replication group should have the same value for
this parameter.
The gigabytes component of the byte-count limit on the amount of
data that will be transmitted from a site in response to a single
message processed by .
The bytes component of the byte-count limit on the amount of data
that will be transmitted from a site in response to a single
message processed by .
The delegate used to transmit data using the replication
application's communication infrastructure.
If true, master leases will be used for this site (defaults to
false).
Configuring this option may result in a
when attempting to read entries
from a database after the site's master lease has expired.
If true, catastrophic recovery was run on this environment before
opening it for normal use.
If true, normal recovery was run on this environment before opening
it for normal use.
The number of microseconds the thread of control will pause before
scheduling further write operations.
A delegate that returns a unique identifier pair for the current
thread of control.
This delegate supports . For more
information, see Architecting Data Store and Concurrent Data Store
applications, and Architecting Transactional Data Store
applications, both in the Berkeley DB Programmer's Reference Guide.
A delegate that formats a process ID and thread ID identifier pair.
If true, allocate region memory from system shared memory instead of
from heap memory or memory backed by the filesystem.
The path of a directory to be used as the location of temporary
files.
The files created to back in-memory access method databases will be
created relative to this path. These temporary files can be quite
large, depending on the size of the database.
If no directories are specified, the following alternatives are
checked in the specified order. The first existing directory path is
used for all temporary files.
- The value of the environment variable TMPDIR.
- The value of the environment variable TEMP.
- The value of the environment variable TMP.
- The value of the environment variable TempFolder.
- The value returned by the GetTempPath interface.
- The directory /var/tmp.
- The directory /usr/tmp.
- The directory /temp.
- The directory /tmp.
- The directory C:/temp.
- The directory C:/tmp.
Environment variables are only checked if
is true.
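The fallback search above can be sketched as a first-existing-path scan; this is an illustrative model, not the library's actual implementation, and it assumes a POSIX `access` check for existence.

```c
#include <stddef.h>
#include <unistd.h>

/* Illustrative sketch of the temporary-directory fallback search described
 * above (not library code): candidates are checked in order and the first
 * existing path wins. Environment-variable candidates would only be added
 * to the list when environment-variable use is enabled. */
const char *first_existing_dir(const char *const candidates[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (candidates[i] != NULL && access(candidates[i], F_OK) == 0)
            return candidates[i];
    return NULL; /* no candidate exists */
}
```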
An approximate number of threads in the database environment.
A delegate that returns if a thread of control (either a true thread
or a process) is still running.
If true, database calls timing out based on lock or transaction
timeout values will throw
instead of .
If true, this allows applications to distinguish between operations
which have deadlocked and operations which have exceeded their time
limits.
If true, Berkeley DB will not write or synchronously flush the log
on transaction commit.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability); that
is, database integrity will be maintained, but if the application or
system fails, it is possible some number of the most recently
committed transactions may be undone during recovery. The number of
transactions at risk is governed by how many log updates can fit
into the log buffer, how often the operating system flushes dirty
buffers to disk, and how often the log is checkpointed.
If true and a lock is unavailable for any Berkeley DB operation
performed in the context of a transaction, cause the operation to
throw (or
if configured with
).
If true, all transactions in the environment will be started as if
was passed to
, and all non-transactional cursors
will be opened as if
was passed to .
A value, in microseconds, representing transaction timeouts.
All timeouts are checked whenever a thread of control blocks on a
lock or when deadlock detection is performed. As timeouts are only
checked when the lock request first blocks or when deadlock
detection is performed, the accuracy of the timeout depends on how
often deadlock detection is performed.
Timeout values specified for the database environment may be
overridden on a per-transaction basis, see
.
The recovery timestamp
If true, Berkeley DB will write, but will not synchronously flush,
the log on transaction commit.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability); that
is, database integrity will be maintained, but if the system fails,
it is possible some number of the most recently committed
transactions may be undone during recovery. The number of
transactions at risk is governed by how often the system flushes
dirty buffers to disk and how often the log is checkpointed.
If true, all databases in the environment will be opened as if
was set.
This flag will be ignored for queue databases for which MVCC is not
supported.
If true, locking for the Berkeley DB Concurrent Data Store product
was initialized.
If true, the locking subsystem was initialized.
If true, the logging subsystem was initialized.
If true, the shared memory buffer pool subsystem was initialized.
If true, the replication subsystem was initialized.
If true, the transaction subsystem was initialized.
Specific additional informational and debugging messages in the
Berkeley DB message output.
If true, Berkeley DB will yield the processor immediately after each
page or mutex acquisition.
This functionality should never be used for purposes other than
stress testing.
The Berkeley DB process' environment may be permitted to specify
information to be used when naming files; see Berkeley DB File
Naming in the Programmer's Reference Guide for more information.
Statistical information about the replication subsystem
Log records currently queued.
Site completed client sync-up.
Current replication status.
Next LSN to use or expect.
LSN we're awaiting, if any.
Maximum permanent LSN.
Next page we expect.
Page we're awaiting, if any.
# of times a duplicate master condition was detected.
Current environment ID.
Current environment priority.
Bulk buffer fills.
Bulk buffer overflows.
Bulk records stored.
Transfers of bulk buffers.
Number of forced rerequests.
Number of client service requests received by this client.
Number of client service requests missing on this client.
Current generation number.
Current election gen number.
Log records received multiply.
Max. log records queued at once.
Total # of log recs. ever queued.
Log records received and put.
Log recs. missed and requested.
Env. ID of the current master.
# of times we've switched masters.
Messages with a bad generation #.
Messages received and processed.
Messages ignored because this site was a client in recovery.
# of failed message sends.
# of successful message sends.
# of NEWSITE msgs. received.
Current number of sites we will assume during elections.
# of times we were throttled.
# of times we detected and returned an OUTDATED condition.
Pages received multiply.
Pages received and stored.
Pages missed and requested.
# of transactions applied.
# of STARTSYNC msgs delayed.
# of elections held.
# of elections won by this site.
Current front-runner.
Election generation number.
Max. LSN of current winner.
# of "registered voters".
# of "registered voters" needed.
Current election priority.
Current election status.
Election tiebreaker value.
Votes received in this round.
Last election time seconds.
Last election time useconds.
Maximum lease timestamp seconds.
Maximum lease timestamp useconds.
Constants representing error codes returned by the Berkeley DB library.
User memory too small for return.
"Null" return from 2ndary callbk.
A foreign db constraint triggered.
Key/data deleted or never created.
The key/data pair already exists.
Deadlock.
Lock unavailable.
In-memory log buffer full.
Server panic return.
Bad home sent to server.
Bad ID sent to server.
Key/data pair not found (EOF).
Out-of-date version.
Requested page not found.
There are two masters.
Rolled back a commit.
Time to hold an election.
This msg should be ignored.
Permanent record cached, not yet written to disk.
Unable to join replication group.
Master lease has expired.
API/Replication lockout now.
New site entered system.
Permanent log record not written.
Site cannot currently be reached.
Panic return.
Secondary index corrupt.
Verify failed; bad format.
Environment version mismatch.
The ActiveTransaction class describes a currently active transaction.
The transaction ID of the transaction.
The transaction ID of the parent transaction (or 0, if no parent).
The process ID of the originator of the transaction.
The thread of control ID of the originator of the transaction.
The current log sequence number when the transaction was begun.
The log sequence number of reads for snapshot transactions.
The number of MVCC buffer copies created by this transaction that
remain in cache.
Status of the transaction.
If the transaction is a prepared transaction, the transaction's
Global ID. Otherwise, the GlobalID contents are undefined.
If a name was specified for the transaction, up to the first 50
bytes of that name.
The status of an active transaction.
The transaction has been aborted
The transaction has been committed
The transaction has been prepared
The transaction is running
A class representing configuration parameters for
A class representing configuration parameters for
A class representing configuration parameters for
The Berkeley DB environment within which to create a database. If
null, the database will be created stand-alone; that is, it is not
part of any Berkeley DB environment.
The database access methods automatically make calls to the other
subsystems in Berkeley DB, based on the enclosing environment. For
example, if the environment has been configured to use locking, the
access methods will automatically acquire the correct locks when
reading and writing pages of the database.
The cache priority for pages referenced by the database.
The priority of a page biases the replacement algorithm to be more
or less likely to discard a page when space is needed in the buffer
pool. The bias is temporary, and pages will eventually be discarded
if they are not referenced again. This priority is only advisory,
and does not guarantee pages will be treated in a specific way.
The size of the shared memory buffer pool -- that is, the cache.
The cache should be the size of the normal working data set of the
application, with some small amount of additional memory for unusual
situations. (Note: the working set is not the same as the number of
pages accessed simultaneously, and is usually much larger.)
The default cache size is 256KB, and may not be specified as less
than 20KB. Any cache size less than 500MB is automatically increased
by 25% to account for buffer pool overhead; cache sizes larger than
500MB are used as specified. The maximum size of a single cache is
4GB on 32-bit systems and 10TB on 64-bit systems. (All sizes are in
powers-of-two, that is, 256KB is 2^18 not 256,000.) For information
on tuning the Berkeley DB cache size, see Selecting a cache size in
the Programmer's Reference Guide.
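The sizing rule above can be sketched as follows; this is an illustrative model of the documented behavior, not the library's actual code.

```c
#include <stdint.h>

/* Illustrative sketch (not library code) of the cache sizing rule described
 * above: cache sizes below 500MB are increased by 25% to account for buffer
 * pool overhead; larger sizes are used as specified. */
#define MB ((uint64_t)1024 * 1024)

uint64_t effective_cache_size(uint64_t requested)
{
    if (requested < 500 * MB)
        return requested + requested / 4; /* +25% overhead */
    return requested;
}
```

For example, the 256KB default becomes 320KB in the region, while a 600MB cache is used as specified.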
The byte order for integers in the stored database metadata. The
host byte order of the machine where the Berkeley DB library was
compiled is the default value.
The access methods provide no guarantees about the byte ordering of
the application data stored in the database, and applications are
responsible for maintaining any necessary ordering.
If creating additional databases in a single physical file, this
parameter will be ignored and the byte order of the existing
databases will be used.
Set the password and algorithm used by the Berkeley DB library to
perform encryption and decryption.
The password used to perform encryption and decryption.
The algorithm used to perform encryption and decryption.
The prefix string that appears before error messages issued by
Berkeley DB.
The mechanism for reporting error messages to the application.
In some cases, when an error occurs, Berkeley DB will call
ErrorFeedback with additional error information. It is up to the
delegate function to display the error message in an appropriate
manner.
This error-logging enhancement does not slow performance or
significantly increase application size, and may be run during
normal operation as well as during application debugging.
For databases opened inside of Berkeley DB environments, setting
ErrorFeedback affects the entire environment and is equivalent to
setting .
If true, do checksum verification of pages read into the cache from
the backing filestore.
Berkeley DB uses the SHA1 Secure Hash Algorithm if encryption is
configured and a general hash algorithm if it is not.
If the database already exists, this setting will be ignored.
If true, Berkeley DB will not write log records for this database.
If Berkeley DB does not write log records, updates of this database
will exhibit the ACI (atomicity, consistency, and isolation)
properties, but not D (durability); that is, database integrity will
be maintained, but if the application or system fails, integrity
will not persist. The database file must be verified and/or restored
from backup after a failure. In order to ensure integrity after
application shut down, the database must be synced when closed, or
all database changes must be flushed from the database environment
cache using either
or
. All database objects
for a single physical file must set NonDurableTxns, including
database objects for different databases in a physical file.
Enclose the open call within a transaction. If the call succeeds,
the open operation will be recoverable and all subsequent database
modification operations based on this handle will be transactionally
protected. If the call fails, no database will have been created.
Cause the database object to be free-threaded; that is, concurrently
usable by multiple threads in the address space.
Do not map this database into process memory.
Open the database for reading only. Any attempt to modify items in
the database will fail, regardless of the actual permissions of any
underlying files.
Support transactional read operations with degree 1 isolation.
Read operations on the database may request the return of modified
but not yet committed data. This flag must be specified on all
database objects used to perform dirty reads or database updates,
otherwise requests for dirty reads may not be honored and the read
may block.
Physically truncate the underlying file, discarding all previous databases it might have held.
Underlying filesystem primitives are used to implement this flag.
For this reason, it is applicable only to the file and cannot be
used to discard databases within a file.
This setting cannot be lock or transaction-protected, and it is an
error to specify it in a locking or transaction-protected
environment.
Open the database with support for multiversion concurrency control.
This will cause updates to the database to follow a copy-on-write
protocol, which is required to support snapshot isolation. This
setting requires that the database be transactionally protected
during its open and is not supported by the queue format.
Instantiate a new DatabaseConfig object
The size of the pages used to hold items in the database, in bytes.
The minimum page size is 512 bytes, the maximum page size is 64K
bytes, and the page size must be a power-of-two. If the page size is
not explicitly set, one is selected based on the underlying
filesystem I/O block size. The automatically selected size has a
lower limit of 512 bytes and an upper limit of 16K bytes.
For information on tuning the Berkeley DB page size, see Selecting a
page size in the Programmer's Reference Guide.
If creating additional databases in a single physical file, this
parameter will be ignored and the page size of the existing
databases will be used.
The password used to perform encryption and decryption.
The algorithm used to perform encryption and decryption.
If true and the secondary database is empty, walk through Primary
and create an index to it in the empty secondary. This operation is
potentially very expensive.
If the secondary database has been opened in an environment
configured with transactions, the entire secondary index creation is
performed in the context of a single transaction.
Care should be taken not to use a newly-populated secondary database
in another thread of control until
has returned successfully in
the first thread.
If transactions are not being used, care should be taken not to
modify a primary database being used to populate a secondary
database, in another thread of control, until
has returned successfully in
the first thread. If transactions are being used, Berkeley DB will
perform appropriate locking and the application need not do any
special operation ordering.
If true, the secondary key is immutable.
This setting can be used to optimize updates when the secondary key
in a primary record will never be changed after the primary record
is inserted. For immutable secondary keys, a best effort is made to
avoid calling the secondary callback function when primary records
are updated. This optimization may reduce the overhead of update
operations significantly if the callback function is expensive.
Be sure to specify this setting only if the secondary key in the
primary record is never changed. If this rule is violated, the
secondary index will become corrupted, that is, it will become out
of sync with the primary.
Instantiate a new SecondaryDatabaseConfig object, with the default
configuration settings.
All updates to Primary will be automatically reflected in the
secondary and all reads from the secondary will return corresponding
data from Primary.
Note that as primary keys must be unique for secondary indices to
work, Primary must have been configured with
.
The delegate that creates the set of secondary keys corresponding to
a given primary key and data pair.
KeyGen may be null if both
Primary.ReadOnly and
are true.
Cause the logical record numbers to be mutable, and change as
records are added to and deleted from the database.
For example, the deletion of record number 4 causes records numbered
5 and greater to be renumbered downward by one. If a cursor was
positioned to record number 4 before the deletion, it will refer to
the new record number 4, if any such record exists, after the
deletion. If a cursor was positioned after record number 4 before
the deletion, it will be shifted downward one logical record,
continuing to refer to the same record as it did before.
Using or to
create new records will cause the creation of multiple records if
the record number is more than one greater than the largest record
currently in the database. For example, creating record 28, when
record 25 was previously the last record in the database, will
create records 26 and 27 as well as 28. Attempts to retrieve records
that were created in this manner will throw a
.
If a created record is not at the end of the database, all records
following the new record will be automatically renumbered upward by
one. For example, the creation of a new record numbered 8 causes
records numbered 8 and greater to be renumbered upward by one. If a
cursor was positioned to record number 8 or greater before the
insertion, it will be shifted upward one logical record, continuing
to refer to the same record as it did before.
For these reasons, concurrent access to a
with this setting specified may
be largely meaningless, although it is supported.
If the database already exists, this setting must be the same as the
existing database or an exception will be thrown.
If true, any file will be read in its
entirety when is called.
If false, may be read lazily.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
The underlying source file for the Recno access method.
The purpose of the source file is to provide fast access and
modification to databases that are normally stored as flat text
files.
The source parameter specifies an underlying flat text database file
that is read to initialize a transient record number index. In the
case of variable length records, the records are separated, as
specified by . For example, standard UNIX
byte stream files can be interpreted as a sequence of variable
length records separated by newline characters.
In addition, when cached data would normally be written back to the
underlying database file (for example,
or
), the in-memory copy of the
database will be written back to the source file.
By default, the backing source file is read lazily; that is, records
are not read from the file until they are requested by the
application. If multiple processes (not threads) are accessing a
Recno database concurrently, and are either inserting or deleting
records, the backing source file must be read in its entirety before
more than a single process accesses the database, and only that
process should specify the backing source file as part of the
call. See
for more information.
Reading and writing the backing source file specified by source
cannot be transaction-protected because it involves filesystem
operations that are not part of the Db transaction methodology. For
this reason, if a temporary database is used to hold the records, it
is possible to lose the contents of the source file, for example, if
the system crashes at the right instant. If a file is used to hold
the database, normal database recovery on that file can be used to
prevent information loss, although it is still possible that the
contents of source will be lost if the system crashes.
The source file must already exist (but may be zero-length) when
is called.
It is not an error to specify a read-only source file when creating
a database, nor is it an error to modify the resulting database.
However, any attempt to write the changes to the backing source file
using either the or
methods will fail, of course.
Use to stop it from
attempting to write the changes to the backing file; instead, they
will be silently discarded.
For all of the previous reasons, the source file is generally used
to specify databases that are read-only for Berkeley DB
applications, and that are either generated on the fly by software
tools or modified using a different mechanism, for example, a text
editor.
If the database already exists, BackingFile must be the same as that
historically used to create the database or corruption can occur.
Instantiate a new SecondaryRecnoDatabaseConfig object
The delimiting byte used to mark the end of a record in
.
This byte is used for variable length records if
is set. If is
specified and no delimiting byte was specified, newline characters
(that is, ASCII 0x0a) are interpreted as end-of-record markers.
If the database already exists, this setting will be ignored.
Specify that the records are fixed-length, not byte-delimited, and
are of length Length.
Any records added to the database that are less than Length bytes
long are automatically padded (see for more
information).
Any attempt to insert records into the database that are greater
than Length bytes long will cause the call to fail immediately and
return an error.
If the database already exists, this setting will be ignored.
The padding character for short, fixed-length records.
If no pad character is specified, space characters (that is, ASCII
0x20) are used for padding.
If the database already exists, this setting will be ignored.
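The fixed-length behavior described above (padding short records, rejecting over-long ones) can be sketched as follows. This is an illustrative Python model with assumed names, not part of the libdb_dotnet48 API.

```python
# Illustrative model (not the libdb_dotnet48 API) of fixed-length Recno
# record handling: short records are padded with the pad byte (space,
# ASCII 0x20, by default); over-long records are rejected immediately.

def store_fixed(record: bytes, length: int, pad: bytes = b" ") -> bytes:
    if len(record) > length:
        raise ValueError("record longer than the configured fixed length")
    return record.ljust(length, pad)  # pad short records to full length

assert store_fixed(b"abc", 5) == b"abc  "
```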
A class providing access to multiple
objects.
Return an enumerator which iterates over all
objects represented by the
.
An enumerator for the
A class to represent cache priority for database pages
The lowest priority: pages are the most likely to be discarded.
The next lowest priority.
The default priority.
The next highest priority.
The highest priority: pages are the least likely to be discarded.
Statistical information about a Sequence
Cache size.
Current cached value.
Flag value.
Last cached value.
Sequence lock granted without wait.
Sequence lock granted after wait.
Maximum value.
Minimum value.
Current value in db.
A class representing configuration parameters for
If true, modify the operation of
to return key/data pairs in order. That is, they will always return
the key/data item from the head of the queue.
The default behavior of queue databases is optimized for multiple
readers, and does not guarantee that records will be retrieved in the
order they are added to the queue. Specifically, if a writing thread
adds multiple records to an empty queue, reading threads may skip
some of the initial records when the next
call returns.
This setting modifies to verify
that the record being returned is in fact the head of the queue.
This will increase contention and reduce concurrency when there are
many reading threads.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
A function to call after the record number has been selected but
before the data has been stored into the database.
When using , it may be useful to
modify the stored data based on the generated key. If a delegate is
specified, it will be called after the record number has been
selected, but before the data has been stored.
Instantiate a new QueueDatabaseConfig object
Specify the length of records in the database.
The record length must be sufficiently smaller than
that at least one record plus
the database page's metadata information can fit on each database
page.
Any records added to the database that are less than Length bytes
long are automatically padded (see for more
information).
Any attempt to insert records into the database that are greater
than Length bytes long will cause the call to fail immediately and
return an error.
If the database already exists, this setting will be ignored.
The padding character for short, fixed-length records.
If no pad character is specified, space characters (that is, ASCII
0x20) are used for padding.
If the database already exists, this setting will be ignored.
The size of the extents used to hold pages in a
, specified as a number of pages.
Each extent is created as a separate physical file. If no extent
size is set, the default behavior is to create only a single
underlying database file.
For information on tuning the extent size, see Selecting an extent
size in the Programmer's Reference Guide.
If the database already exists, this setting will be ignored.
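The page-to-extent relationship can be illustrated with a small sketch. This is an assumed model of the layout described above, not the libdb_dotnet48 API.

```python
# Illustrative sketch (not the libdb_dotnet48 API): with an extent size
# of N pages, page number P of a Queue database lives in extent file
# number P // N, each extent being a separate physical file on disk.

def extent_for_page(page_number: int, extent_size_pages: int) -> int:
    return page_number // extent_size_pages

# With 4-page extents, pages 0-3 share extent 0, pages 4-7 extent 1.
assert extent_for_page(5, 4) == 1
```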
A class representing configuration parameters for
Policy for duplicate data items in the database; that is, whether
insertion of a key/data pair whose key already exists in the
database will be successful.
The ordering of duplicates in the database for
is determined by the order
of insertion, unless the ordering is otherwise specified by use of a
cursor operation or a duplicate sort function. The ordering of
duplicates in the database for
is determined by the
duplicate comparison function. If the application does not specify a
comparison function using
, a default lexical
comparison will be used.
is preferred to
for performance reasons.
should only be used by
applications wanting to order duplicate data items manually.
If the database already exists, the value of Duplicates must be the
same as the existing database or an error will be returned.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
The Hash key comparison function.
The comparison function is called whenever it is necessary to
compare a key specified by the application with a key currently
stored in the database.
If no comparison function is specified, the keys are compared
lexically, with shorter keys collating before longer keys.
If the database already exists, the comparison function must be the
same as that historically used to create the database or corruption
can occur.
A user-defined hash function; if no hash function is specified, a
default hash function is used.
Because no hash function performs equally well on all possible data,
the user may find that the built-in hash function performs poorly
with a particular data set.
If the database already exists, HashFunction must be the same as
that historically used to create the database or corruption can
occur.
The duplicate data item comparison function.
The comparison function is called whenever it is necessary to
compare a data item specified by the application with a data item
currently stored in the database. Setting DuplicateCompare implies
setting to
.
If no comparison function is specified, the data items are compared
lexically, with shorter data items collating before longer data
items.
If the database already exists when
is called, the delegate must be the same as that historically used
to create the database or corruption can occur.
Instantiate a new HashDatabaseConfig object
The desired density within the hash table. If no value is specified,
the fill factor will be selected dynamically as pages are filled.
The density is an approximation of the number of keys allowed to
accumulate in any one bucket, determining when the hash table grows
or shrinks. If you know the average sizes of the keys and data in
your data set, setting the fill factor can enhance performance. A
reasonable rule for computing the fill factor is to set it to the following:
(pagesize - 32) / (average_key_size + average_data_size + 8)
If the database already exists, this setting will be ignored.
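The rule of thumb above can be written as a small helper. The function name is illustrative, not part of the libdb_dotnet48 API; the formula is taken directly from the text.

```python
# Compute the suggested hash fill factor from the rule of thumb:
# (pagesize - 32) / (average_key_size + average_data_size + 8)

def suggested_fill_factor(pagesize: int,
                          average_key_size: int,
                          average_data_size: int) -> int:
    return (pagesize - 32) // (average_key_size + average_data_size + 8)

# For 4KB pages with ~20-byte keys and ~100-byte data items:
assert suggested_fill_factor(4096, 20, 100) == 31
```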
An estimate of the final size of the hash table.
In order for the estimate to be used when creating the database,
must also be set. If the estimate or fill
factor are not set or are set too low, hash tables will still expand
gracefully as keys are entered, although a slight performance
degradation may be noticed.
If the database already exists, this setting will be ignored.
A class representing a RecnoDatabase. The Recno format supports fixed-
or variable-length records, accessed sequentially or by logical record
number, and optionally backed by a flat text file.
A class representing a secondary Berkeley DB database, a base class for
access method specific classes.
Protected constructor
The environment in which to open the DB
Flags to pass to DB->create
Protected method to configure the DB. Only valid before DB->open.
Configuration parameters.
Instantiate a new SecondaryDatabase object, open the database
represented by and associate the
database with the
primary index. The file specified by
must exist.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database.
The database's configuration
A new, open database object
Instantiate a new SecondaryDatabase object, open the database
represented by and associate the
database with the
primary index. The file specified by
must exist.
If is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new SecondaryDatabase object, open the database
represented by and associate the
database with the
primary index. The file specified by
must exist.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new SecondaryDatabase object, open the database
represented by and associate the
database with the
primary index. The file specified by
must exist.
If is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Protected method to call the key generation function.
Secondary DB Handle
Primary Key
Primary Data
Secondary Key
0 on success, nonzero on failure
Protected method to nullify a foreign key
Secondary DB Handle
Primary Key
Primary Data
Foreign Key
Whether the foreign key has changed
0 on success, nonzero on failure
Create a secondary database cursor.
A newly created cursor
Create a secondary database cursor with the given configuration.
The configuration properties for the cursor.
A newly created cursor
Create a transactionally protected secondary database cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Create a transactionally protected secondary database cursor with
the given configuration.
The configuration properties for the cursor.
The transaction context in which the cursor may be used.
A newly created cursor
The delegate that creates the set of secondary keys corresponding to
a given primary key and data pair.
Instantiate a new SecondaryRecnoDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new SecondaryRecnoDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new SecondaryRecnoDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new SecondaryRecnoDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
If true, the logical record numbers are mutable, and change as
records are added to and deleted from the database.
If true, any file will be read in its
entirety when is called. If false,
may be read lazily.
The delimiting byte used to mark the end of a record in
.
If using fixed-length, not byte-delimited records, the length of the
records.
The padding character for short, fixed-length records.
The underlying source file for the Recno access method.
Statistical information about the transaction subsystem
Number of aborted transactions
Number of active transactions
Number of begun transactions
Number of committed transactions
LSN of the last checkpoint
Time of last checkpoint
Last transaction id given out
Maximum active transactions
Maximum snapshot transactions
Maximum transactions possible
Region lock granted without wait.
Region size.
Region lock granted after wait.
Number of restored transactions after recovery.
Number of snapshot transactions
List of active transactions
A class for traversing the records of a
A class representing database cursors, which allow for traversal of
database records.
The abstract base class from which all cursor classes inherit.
Cursors may span threads, but only serially; that is, the application
must serialize access to the cursor handle.
The underlying DBC handle
Compare this cursor's position to another's.
The cursor with which to compare.
True if both cursors point to the same item, false otherwise.
Returns a count of the number of data items for the key to which the
cursor refers.
A count of the number of data items for the key to which the cursor
refers.
Discard the cursor.
It is possible for the Close() method to throw a
, signaling that any enclosing
transaction should be aborted. If the application is already
intending to abort the transaction, this error should be ignored,
and the application should proceed.
After Close has been called, regardless of its result, the object
may not be used again.
Release the resources held by this object, and close the cursor if
it's still open.
Delete the key/data pair to which the cursor refers.
When called on a SecondaryCursor, delete the key/data pair from the
primary database and all secondary indices.
The cursor position is unchanged after a delete, and subsequent
calls to cursor functions expecting the cursor to refer to an
existing key will fail.
Thrown if the element has already been deleted.
Returns an enumerator that iterates through the cursor.
An enumerator for the cursor.
Protected member, storing the pagesize of the underlying database.
Used during bulk get (i.e. Move*Multiple).
Protected method for BTree and Hash to insert with KEYFIRST and
KEYLAST.
The key/data pair to add
Where to add, if adding duplicate data
Protected method for BTree and Hash to insert with NODUPDATA.
The key/data pair to add
Protected method for BTree, Hash and Recno to insert with AFTER and
BEFORE.
The duplicate data item to add
Whether to add the dup data before or after the current cursor
position
Protected method wrapping DBC->get.
The key to retrieve
The data to retrieve
Modify the behavior of get
The locking configuration to use
True if the cursor was positioned successfully, false otherwise.
Protected method wrapping DBC->get for bulk get.
The key to retrieve
The data to retrieve
Size of the bulk buffer
Modify the behavior of get
The locking configuration to use
If true, use DB_MULTIPLE_KEY instead of DB_MULTIPLE
True if the cursor was positioned successfully, false otherwise.
Protected method wrapping DBC->put.
The key to store
The data to store
Modify the behavior of put
Stores the key/data pair in the database.
If the underlying database supports duplicate data items, and if the
key already exists in the database and a duplicate sort function has
been specified, the inserted data item is added in its sorted
location. If the key already exists in the database and no duplicate
sort function has been specified, the inserted data item is added as
the first of the data items for that key.
The key/data pair to be stored in the database.
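Where a duplicate data item lands can be modeled with a short sketch: with a duplicate sort function the item is inserted in sorted position, and without one this method adds it as the first of the key's data items. This is an illustrative Python model, not the libdb_dotnet48 API.

```python
# Illustrative model (not the libdb_dotnet48 API) of duplicate ordering
# on insert: sorted insertion when a duplicate sort function exists,
# otherwise the new item becomes the first duplicate for the key.
import bisect

def add_duplicate(dups: list, item, sorted_dups: bool) -> None:
    if sorted_dups:
        bisect.insort(dups, item)   # keep duplicates in sorted order
    else:
        dups.insert(0, item)        # new item becomes head of the list

d = ["b", "d"]
add_duplicate(d, "c", sorted_dups=True)
assert d == ["b", "c", "d"]
```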
Delete the key/data pair to which the cursor refers.
The cursor position is unchanged after a delete, and subsequent
calls to cursor functions expecting the cursor to refer to an
existing key will fail.
The element has already been deleted.
Create a new cursor that uses the same transaction and locker ID as
the original cursor.
This is useful when an application is using locking and requires two
or more cursors in the same thread of control.
If true, the newly created cursor is initialized to refer to the
same position in the database as the original cursor (if any) and
hold the same locks (if any). If false, or the original cursor does
not hold a database position and locks, the created cursor is
uninitialized and will behave like a cursor newly created by
.
A newly created cursor
Returns an enumerator that iterates through the
.
The enumerator will begin at the cursor's current position (or the
first record if the cursor has not yet been positioned) and iterate
forwards (i.e. in the direction of ) over the
remaining records.
An enumerator for the Cursor.
Set the cursor to refer to the first key/data pair of the database,
and store that pair in . If the first key has
duplicate values, the first data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that pair in . If the first key has
duplicate values, the first data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that key and as many duplicate data items as can fit in
a buffer the size of one database page in
.
If positioning the cursor fails, will
contain an empty
.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that key and as many duplicate data items as can fit in
a buffer the size of in
.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
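The buffer-size constraint stated above is easy to check up front. The helper name is illustrative, not part of the libdb_dotnet48 API; the rule itself (at least one page, a multiple of 1024) comes straight from the parameter description.

```python
# A minimal check mirroring the stated constraint: the bulk buffer must
# be at least the database page size and a multiple of 1024 bytes.

def validate_bulk_buffer(buffer_size: int, pagesize: int) -> bool:
    return buffer_size >= pagesize and buffer_size % 1024 == 0

assert validate_bulk_buffer(8192, 4096)
assert not validate_bulk_buffer(4096, 8192)   # smaller than a page
assert not validate_bulk_buffer(5000, 4096)   # not a multiple of 1024
```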
Set the cursor to refer to the first key/data pair of the database,
and store that key and as many duplicate data items as can fit in
a buffer the size of one database page in
.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that key and as many duplicate data items as can fit in
a buffer the size of in
.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that pair and as many ensuing key/data pairs as can fit
in a buffer the size of one database page in
.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that pair and as many ensuing key/data pairs as can fit
in a buffer the size of in
.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that pair and as many ensuing key/data pairs as can fit
in a buffer the size of one database page in
.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store that pair and as many ensuing key/data pairs as can fit
in a buffer the size of in
.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store the
datum associated with the given key in . In the
presence of duplicate key values, the first data item in the set of
duplicates is stored in .
If positioning the cursor fails, will contain
an empty .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
True if the cursor was positioned successfully, false otherwise.
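The exact-match versus range-search behavior can be modeled over a sorted key list: with exact matching disabled, the cursor lands on the smallest key greater than or equal to the one requested. This is an illustrative Python sketch with assumed names, not the libdb_dotnet48 API.

```python
# Illustrative model (not the libdb_dotnet48 API) of exact-match versus
# range positioning over a sorted list of keys.
import bisect

def position(keys: list, target, exact: bool):
    """keys must be sorted; returns the key the cursor would land on."""
    i = bisect.bisect_left(keys, target)
    if i < len(keys) and keys[i] == target:
        return keys[i]              # exact hit
    if exact or i == len(keys):
        return None                 # positioning fails
    return keys[i]                  # smallest key >= target

keys = ["apple", "cherry", "plum"]
assert position(keys, "banana", exact=True) is None
assert position(keys, "banana", exact=False) == "cherry"
```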
Set the cursor to refer to , and store the
datum associated with the given key in . In the
presence of duplicate key values, the first data item in the set of
duplicates is stored in .
If positioning the cursor fails, will contain
an empty .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database. The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
If positioning the cursor fails, will contain
an empty .
If this flag is specified on a database configured without sorted
duplicate support, the value of is ignored.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database. The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
If positioning the cursor fails, will contain
an empty .
If this flag is specified on a database configured without sorted
duplicate support, the value of is ignored.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the last key/data pair of the database,
and store that pair in . If the last key has
duplicate values, the last data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the last key/data pair of the database,
and store that pair in . If the last key has
duplicate values, the last data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many duplicate data items associated with the given key as
can fit in a buffer the size of one database page in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many duplicate data items associated with the given key as
can fit in a buffer the size of in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many duplicate data items associated with the given key as
can fit in a buffer the size of one database page in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many duplicate data items associated with the given key as
can fit in a buffer the size of in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many duplicate data items associated
with the given key as can fit in a buffer the size of one database
page in . The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many duplicate data items associated
with the given key as can fit in a buffer the size of
in . The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many duplicate data items associated
with the given key as can fit in a buffer the size of one database
page in . The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many duplicate data items associated
with the given key as can fit in a buffer the size of
in . The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many ensuing key/data pairs as can fit in a buffer the
size of one database page in .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many ensuing key/data pairs as can fit in a buffer the
size of in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many ensuing key/data pairs as can fit in a buffer the
size of one database page in .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store that
key and as many ensuing key/data pairs as can fit in a buffer the
size of in
.
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
. The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of in
. The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
. The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database, and
store that key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of in
. The cursor is positioned to a
key/data pair if both the key and data match the values provided on
the key and data parameters.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
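The MoveMultiple variants above fill a bulk buffer in a single call, which avoids one cursor round trip per duplicate. A sketch, assuming an open Cursor named cursor over a database with sorted duplicates, and assuming CurrentMultiple pairs the key with an enumerable MultipleDatabaseEntry:

```csharp
// Fetch a key plus as many of its duplicates as fit in one page-sized
// buffer, then iterate the packed result without further cursor calls.
var key = new DatabaseEntry(Encoding.UTF8.GetBytes("color"));
if (cursor.MoveMultiple(key, true)) {
    MultipleDatabaseEntry dups = cursor.CurrentMultiple.Value;
    foreach (DatabaseEntry d in dups)
        Console.WriteLine(Encoding.UTF8.GetString(d.Data));
}
```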
If the cursor is not yet initialized, MoveNext is identical to
. Otherwise, move the cursor to the next
key/data pair of the database, and store that pair in
. In the presence of duplicate key values, the
value of Current.Key may not change.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNext is identical to
. Otherwise, move the cursor to
the next key/data pair of the database, and store that pair in
. In the presence of duplicate key values, the
value of Current.Key may not change.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
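Because the first MoveNext on an uninitialized cursor acts like MoveFirst, a full forward scan needs only a single loop. A sketch, assuming an open database handle db:

```csharp
// Visit every key/data pair in key order.
using (Cursor cursor = db.Cursor()) {
    while (cursor.MoveNext()) {
        KeyValuePair<DatabaseEntry, DatabaseEntry> pair = cursor.Current;
        Console.WriteLine("{0} => {1}",
            Encoding.UTF8.GetString(pair.Key.Data),
            Encoding.UTF8.GetString(pair.Value.Data));
    }
}
```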
If the cursor is not yet initialized, MoveNextMultiple is identical
to . Otherwise, move the cursor to
the next key/data pair of the database, and store that pair and as
many duplicate data items as can fit in a buffer the size of one
database page in . In the presence of
duplicate key values, the value of
CurrentMultiple.Key may not
change.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultiple is identical
to . Otherwise, move the cursor
to the next key/data pair of the database, and store that pair and
as many duplicate data items as can fit in a buffer the size of
in . In
the presence of duplicate key values, the value of
CurrentMultiple.Key may not
change.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultiple is identical
to . Otherwise, move the
cursor to the next key/data pair of the database, and store that
pair and as many duplicate data items as can fit in a buffer the
size of one database page in . In the
presence of duplicate key values, the value of
CurrentMultiple.Key may not
change.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultiple is identical
to . Otherwise,
move the cursor to the next key/data pair of the database, and store
that pair and as many duplicate data items as can fit in a buffer
the size of in
. In the presence of duplicate key
values, the value of
CurrentMultiple.Key may not
change.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultipleKey is
identical to . Otherwise, move
the cursor to the next key/data pair of the database, and store that
pair and as many ensuing key/data pairs as can fit in a buffer the
size of one database page in . In
the presence of duplicate key values, the keys of
may not change.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultipleKey is
identical to . Otherwise,
move the cursor to the next key/data pair of the database, and store
that pair and as many ensuing key/data pairs as can fit in a
buffer the size of in
. In the presence of duplicate key
values, the keys of may not change.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultipleKey is
identical to .
Otherwise, move the cursor to the next key/data pair of the
database, and store that pair and as many ensuing key/data pairs
as can fit in a buffer the size of one database page in
. In the presence of duplicate key
values, the keys of may not change.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextMultipleKey is
identical to .
Otherwise, move the cursor to the next key/data pair of the
database, and store that pair and as many ensuing key/data pairs
as can fit in a buffer the size of
in . In the presence of duplicate
key values, the keys of may not
change.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair in .
MoveNextDuplicate will return false if the next key/data pair of the
database is not a duplicate data record for the current key/data
pair.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair in .
MoveNextDuplicate will return false if the next key/data pair of the
database is not a duplicate data record for the current key/data
pair.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
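A common pattern is to position on a key with Move and then walk only that key's duplicates, stopping when MoveNextDuplicate reports the next pair belongs to a different key. A sketch (cursor is an open Cursor; the key value is illustrative):

```csharp
// Print every duplicate data item stored under one key.
var key = new DatabaseEntry(Encoding.UTF8.GetBytes("color"));
if (cursor.Move(key, true)) {
    do {
        Console.WriteLine(Encoding.UTF8.GetString(cursor.Current.Value.Data));
    } while (cursor.MoveNextDuplicate());
}
```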
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of one database page in
. MoveNextDuplicateMultiple will return
false if the next key/data pair of the database is not a duplicate
data record for the current key/data pair.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of
in .
MoveNextDuplicateMultiple will return false if the next key/data
pair of the database is not a duplicate data record for the current
key/data pair.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of one database page in
. MoveNextDuplicateMultiple will return
false if the next key/data pair of the database is not a duplicate
data record for the current key/data pair.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of
in .
MoveNextDuplicateMultiple will return false if the next key/data
pair of the database is not a duplicate data record for the current
key/data pair.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of one database page in
. MoveNextDuplicateMultipleKey will
return false if the next key/data pair of the database is not a
duplicate data record for the current key/data pair.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of
in .
MoveNextDuplicateMultipleKey will return false if the next key/data
pair of the database is not a duplicate data record for the current
key/data pair.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of one database page in
. MoveNextDuplicateMultipleKey will
return false if the next key/data pair of the database is not a
duplicate data record for the current key/data pair.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store that pair and as many duplicate data
items as can fit in a buffer the size of
in .
MoveNextDuplicateMultipleKey will return false if the next key/data
pair of the database is not a duplicate data record for the current
key/data pair.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUnique is identical to
. Otherwise, move the cursor to the next
non-duplicate key in the database, and store that key and associated
datum in . MoveNextUnique will return false if
no non-duplicate key/data pairs exist after the cursor position in
the database.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUnique is identical to
. Otherwise, move the cursor to
the next non-duplicate key in the database, and store that key and
associated datum in . MoveNextUnique will
return false if no non-duplicate key/data pairs exist after the
cursor position in the database.
If the database is a Queue or Recno database, MoveNextUnique will
ignore any keys that exist but were never explicitly created by the
application, or those that were created and later deleted.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
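MoveNextUnique makes it easy to enumerate distinct keys while skipping over duplicate data items. A sketch, assuming an open Cursor named cursor:

```csharp
// List each key once, regardless of how many duplicates it has.
while (cursor.MoveNextUnique())
    Console.WriteLine(Encoding.UTF8.GetString(cursor.Current.Key.Data));
```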
If the cursor is not yet initialized, MoveNextUniqueMultiple is
identical to . Otherwise, move the
cursor to the next non-duplicate key in the database, and store that
key and associated datum and as many duplicate data items as can
fit in a buffer the size of one database page in
. MoveNextUniqueMultiple will return
false if no non-duplicate key/data pairs exist after the cursor
position in the database.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultiple is
identical to . Otherwise, move
the cursor to the next non-duplicate key in the database, and store
that key and associated datum and as many duplicate data items as
can fit in a buffer the size of in
. MoveNextUniqueMultiple will return
false if no non-duplicate key/data pairs exist after the cursor
position in the database.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultiple is
identical to .
Otherwise, move the cursor to the next non-duplicate key in the
database, and store that key and associated datum and as many
duplicate data items as can fit in a buffer the size of one
database page in .
MoveNextUniqueMultiple will return false if no non-duplicate
key/data pairs exist after the cursor position in the database.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultiple is
identical to .
Otherwise, move the cursor to the next non-duplicate key in the
database, and store that key and associated datum and as many
duplicate data items as can fit in a buffer the size of
in .
MoveNextUniqueMultiple will return false if no non-duplicate
key/data pairs exist after the cursor position in the database.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultipleKey is
identical to . Otherwise, move
the cursor to the next non-duplicate key in the database, and store
that key and associated datum and as many ensuing key/data pairs
as can fit in a buffer the size of one database page in
. MoveNextUniqueMultipleKey will
return false if no non-duplicate key/data pairs exist after the
cursor position in the database.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultipleKey is
identical to . Otherwise,
move the cursor to the next non-duplicate key in the database, and
store that key and associated datum and as many ensuing key/data
pairs as can fit in a buffer the size of
in .
MoveNextUniqueMultipleKey will return false if no non-duplicate
key/data pairs exist after the cursor position in the database.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultipleKey is
identical to .
Otherwise, move the cursor to the next non-duplicate key in the
database, and store that key and associated datum and as many
ensuing key/data pairs as can fit in a buffer the size of one
database page in .
MoveNextUniqueMultipleKey will return false if no non-duplicate
key/data pairs exist after the cursor position in the database.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUniqueMultipleKey is
identical to .
Otherwise, move the cursor to the next non-duplicate key in the
database, and store that key and associated datum and as many
ensuing key/data pairs as can fit in a buffer the size of
in .
MoveNextUniqueMultipleKey will return false if no non-duplicate
key/data pairs exist after the cursor position in the database.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrev is identical to
. Otherwise, move the cursor to the previous
key/data pair of the database, and store that pair in
. In the presence of duplicate key values, the
value of Current.Key may not change.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrev is identical to
. Otherwise, move the cursor to
the previous key/data pair of the database, and store that pair in
. In the presence of duplicate key values, the
value of Current.Key may not change.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
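Since the first MovePrev on an uninitialized cursor acts like MoveLast, a reverse scan mirrors the forward-scan pattern. A sketch, assuming an open Cursor named cursor:

```csharp
// Walk the database from the last key/data pair back to the first.
while (cursor.MovePrev())
    Console.WriteLine(Encoding.UTF8.GetString(cursor.Current.Key.Data));
```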
If the previous key/data pair of the database is a duplicate data
record for the current key/data pair, the cursor is moved to the
previous key/data pair of the database, and that pair is stored in
. MovePrevDuplicate will return false if the
previous key/data pair of the database is not a duplicate data
record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the previous key/data pair of the database is a duplicate data
record for the current key/data pair, the cursor is moved to the
previous key/data pair of the database, and that pair is stored in
. MovePrevDuplicate will return false if the
previous key/data pair of the database is not a duplicate data
record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrevUnique is identical to
. Otherwise, move the cursor to the previous
non-duplicate key in the database, and store that key and associated
datum in . MovePrevUnique will return false if
no non-duplicate key/data pairs exist before the cursor position in
the database.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrevUnique is identical to
. Otherwise, move the cursor to
the previous non-duplicate key in the database, and store that key
and associated datum in . MovePrevUnique will
return false if no non-duplicate key/data pairs exist before the
cursor position in the database.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Overwrite the data of the key/data pair to which the cursor refers
with the specified data item.
Store the key/data pair to which the cursor refers in
.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers in
.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
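Refresh and Overwrite combine naturally for an in-place update: re-read the pair under the cursor, then replace its data without moving. A sketch (cursor is an open, positioned Cursor; the replacement value is illustrative):

```csharp
// Replace the data of the pair the cursor currently refers to.
if (cursor.Refresh()) {
    var newData = new DatabaseEntry(Encoding.UTF8.GetBytes("updated"));
    cursor.Overwrite(newData);
}
```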
Store the key/data pair to which the cursor refers and as many
duplicate data items as can fit in a buffer the size of one
database page in .
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
duplicate data items as can fit in a buffer the size of
in .
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
duplicate data items as can fit in a buffer the size of one
database page in .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
duplicate data items as can fit in a buffer the size of
in .
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
ensuing key/data pairs as can fit in a buffer the size of one
database page in .
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
ensuing key/data pairs as can fit in a buffer the size of
in .
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
ensuing key/data pairs as can fit in a buffer the size of one
database page in .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Store the key/data pair to which the cursor refers and as many
ensuing key/data pairs as can fit in a buffer the size of
in .
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
The key/data pair at which the cursor currently points.
Only one of , and
will ever be non-empty.
The key and multiple data items at which the cursor currently
points.
Only one of , and
will ever be non-empty.
The multiple key and data items at which the cursor currently
points.
Only one of , and
will ever be non-empty.
The cache priority for pages referenced by the cursor.
The priority of a page biases the replacement algorithm to be more
or less likely to discard a page when space is needed in the buffer
pool. The bias is temporary, and pages will eventually be discarded
if they are not referenced again. The setting is only advisory, and
does not guarantee pages will be treated in a specific way.
Specifies where to place duplicate data elements of the key to which
the cursor refers.
The new element appears immediately after the current cursor
position.
The new element appears immediately before the current cursor
position.
The new element appears as the first of the data items for the
given key
The new element appears as the last of the data items for the
given key
Create a new cursor that uses the same transaction and locker ID as
the original cursor.
This is useful when an application is using locking and requires two
or more cursors in the same thread of control.
If true, the newly created cursor is initialized to refer to the
same position in the database as the original cursor (if any) and
hold the same locks (if any). If false, or the original cursor does
not hold a database position and locks, the created cursor is
uninitialized and will behave like a cursor newly created by
.
A newly created cursor
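Duplicating a cursor lets one thread of control inspect ahead without losing its place. A sketch, assuming an open, positioned Cursor named cursor:

```csharp
// Clone within the same transaction/locker ID; with true the copy
// starts at the same position and holds the same locks (if any).
using (Cursor copy = cursor.Duplicate(true)) {
    copy.MoveNext();   // advances only the copy
    // ... the original cursor's position is unchanged
}
```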
Insert the data element as a duplicate element of the key to which
the cursor refers.
The data element to insert
Specify whether to insert the data item immediately before or
immediately after the cursor's current position.
Insert the specified key/data pair into the database, unless a
key/data pair comparing equally to it already exists in the
database.
The key/data pair to be inserted
Thrown if a matching key/data pair already exists in the database.
Insert the specified key/data pair into the database.
The key/data pair to be inserted
If the key already exists in the database and no duplicate sort
function has been specified, specify whether the inserted data item
is added as the first or the last of the data items for that key.
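The three insertion methods can be combined into an insert-or-append-duplicate pattern. A sketch, assuming an open Cursor named cursor on a database configured for duplicates, and assuming the exception type is named KeyExistException:

```csharp
var pair = new KeyValuePair<DatabaseEntry, DatabaseEntry>(
    new DatabaseEntry(Encoding.UTF8.GetBytes("color")),
    new DatabaseEntry(Encoding.UTF8.GetBytes("blue")));
try {
    cursor.AddUnique(pair);            // throws if an equal pair exists
} catch (KeyExistException) {
    cursor.Add(pair, Cursor.InsertLocation.LAST);  // append as duplicate
}
```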
A class representing Berkeley DB transactions
Calling ,
or
will release the resources held by
the created object.
Transactions may only span threads if they do so serially; that is,
each transaction must be active in only a single thread of control
at a time. This restriction holds for parents of nested transactions
as well; no two children may be concurrently active in more than one
thread of control at any one time.
Cursors may not span transactions; that is, each cursor must be
opened and closed within a single transaction.
A parent transaction may not issue any Berkeley DB operations —
except for ,
and
— while it has active child transactions (child transactions that
have not yet been committed or aborted).
The size of the global transaction ID
Cause an abnormal termination of the transaction.
Before Abort returns, any locks held by the transaction will have
been released.
In the case of nested transactions, aborting a parent transaction
causes all children (unresolved or not) of the parent transaction to
be aborted.
All cursors opened within the transaction must be closed before the
transaction is aborted.
End the transaction.
In the case of nested transactions, if the transaction is a parent
transaction, committing the parent transaction causes all unresolved
children of the parent to be committed. In the case of nested
transactions, if the transaction is a child transaction, its locks
are not released, but are acquired by its parent. Although the
commit of the child transaction will succeed, the actual resolution
of the child transaction is postponed until the parent transaction
is committed or aborted; that is, if its parent transaction commits,
it will be committed; and if its parent transaction aborts, it will
be aborted.
All cursors opened within the transaction must be closed before the
transaction is committed.
End the transaction.
Synchronously flushing the log is the default for Berkeley DB
environments unless
was specified.
Synchronous log flushing may also be set or unset for a single
transaction using
. The
value of overrides both of those
settings.
If true, synchronously flush the log.
Free up all the per-process resources associated with the specified
Transaction instance, neither committing nor aborting the
transaction.
This call may be used only after calls to
when there are multiple
global transaction managers recovering transactions in a single
Berkeley DB environment. Any transactions returned by
that are not handled by
the current global transaction manager should be discarded using
Discard.
Initiate the beginning of a two-phase commit.
In a distributed transaction environment, Berkeley DB can be used as
a local transaction manager. In this case, the distributed
transaction manager must send prepare messages to each local
manager. The local manager must then call Prepare and await its
successful return before responding to the distributed transaction
manager. Only after the distributed transaction manager receives
successful responses from all of its prepare messages should it
issue any commit messages.
In the case of nested transactions, preparing the parent causes all
unresolved children of the parent transaction to be committed. Child
transactions should never be explicitly prepared. Their fate will be
resolved along with their parent's during global recovery.
The global transaction ID by which this transaction will be known.
This global transaction ID will be returned in calls to
telling the
application which global transactions must be resolved.
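The prepare-then-commit protocol described above can be sketched as follows. This is an illustration only, shown in Python to keep it self-contained; `LocalManager` and `distributed_commit` are hypothetical names, not part of the libdb_dotnet48 API.

```python
# Illustrative sketch of the two-phase commit flow described above.
# `LocalManager` is a hypothetical stand-in for a Berkeley DB-backed
# local transaction manager; prepare() corresponds to a successful
# return from the local Prepare call.
class LocalManager:
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self, global_id):
        self.state = "prepared"
        return True

    def commit(self):
        assert self.state == "prepared"
        self.state = "committed"

def distributed_commit(managers, global_id):
    # Phase 1: every local manager must prepare successfully...
    if not all(m.prepare(global_id) for m in managers):
        return False
    # Phase 2: ...and only then may commit messages be issued.
    for m in managers:
        m.commit()
    return True
```

The key invariant is that no commit message is sent until every prepare has returned successfully, which is exactly the ordering the text above requires of the distributed transaction manager.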
Set the timeout value for locks for this transaction.
Timeouts are checked whenever a thread of control blocks on a lock
or when deadlock detection is performed. This timeout is for any
single lock request. As timeouts are only checked when the lock
request first blocks or when deadlock detection is performed, the
accuracy of the timeout depends on how often deadlock detection is
performed.
Timeout values may be specified for the database environment as a
whole. See for more
information.
An unsigned 32-bit number of microseconds, limiting the maximum
timeout to roughly 71 minutes. A value of 0 disables timeouts for
the transaction.
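The roughly-71-minute ceiling follows directly from the unsigned 32-bit microsecond representation; a quick arithmetic check (language-agnostic, shown here in Python):

```python
# Maximum timeout representable as an unsigned 32-bit count of
# microseconds: 2**32 - 1 = 4,294,967,295 microseconds.
MAX_U32 = 2**32 - 1

max_timeout_minutes = MAX_U32 / 1_000_000 / 60
print(round(max_timeout_minutes, 1))  # roughly 71.6 minutes
```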
Set the timeout value for transactions for this transaction.
Timeouts are checked whenever a thread of control blocks on a lock
or when deadlock detection is performed. This timeout is for the
life of the transaction. As timeouts are only checked when the lock
request first blocks or when deadlock detection is performed, the
accuracy of the timeout depends on how often deadlock detection is
performed.
Timeout values may be specified for the database environment as a
whole. See for more
information.
An unsigned 32-bit number of microseconds, limiting the maximum
timeout to roughly 71 minutes. A value of 0 disables timeouts for
the transaction.
The unique transaction id associated with this transaction.
The transaction's name. The name is returned by
and displayed by
.
If the database environment has been configured for logging and the
Berkeley DB library was built in Debug mode (or with DIAGNOSTIC
defined), a debugging log record is written including the
transaction ID and the name.
A class representing a QueueDatabase. The Queue format supports fast
access to fixed-length records accessed sequentially or by logical
record number.
Instantiate a new QueueDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new QueueDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that
the transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Append the data item to the end of the database.
The data item to store in the database
The record number allocated to the record
Append the data item to the end of the database.
There is a minor behavioral difference between
and
. If a transaction enclosing an
Append operation aborts, the record number may be reallocated in a
subsequent operation, but it will
not be reallocated in a subsequent
operation.
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The record number allocated to the record
Return the record number and data from the available record closest
to the head of the queue, and delete the record.
If true and the Queue database is empty, the thread of control will
wait until there is data in the queue before returning.
If lock or transaction timeouts have been specified, a
may be thrown. This failure,
by itself, does not require the enclosing transaction be aborted.
A whose Key
parameter is the record number and whose Value parameter is the
retrieved data.
Return the record number and data from the available record closest
to the head of the queue, and delete the record.
If true and the Queue database is empty, the thread of control will
wait until there is data in the queue before returning.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
If lock or transaction timeouts have been specified, a
may be thrown. This failure,
by itself, does not require the enclosing transaction be aborted.
A whose Key
parameter is the record number and whose Value parameter is the
retrieved data.
Return the record number and data from the available record closest
to the head of the queue, and delete the record.
If true and the Queue database is empty, the thread of control will
wait until there is data in the queue before returning.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
If lock or transaction timeouts have been specified, a
may be thrown. This failure,
by itself, does not require the enclosing transaction be aborted.
A whose Key
parameter is the record number and whose Value parameter is the
retrieved data.
Return the database statistical information which does not require
traversal of the database.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
The statistical information is described by the
, ,
, and classes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
The database statistical information which does not require
traversal of the database.
Return the database statistical information for this database.
Database statistical information.
Return the database statistical information for this database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Database statistical information.
Return the database statistical information for this database.
The statistical information is described by
.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
Database statistical information.
The size of the extents used to hold pages in a
, specified as a number of pages.
If true, modify the operation of
to return key/data pairs in order. That is, they will always return
the key/data item from the head of the queue.
The length of records in the database.
The padding character for short, fixed-length records.
A class representing a transaction that must be resolved by the
application following .
The transaction which must be committed, aborted or discarded.
The global transaction ID for the transaction. The global
transaction ID is the one specified when the transaction was
prepared. The application is responsible for ensuring uniqueness
among global transaction IDs.
Statistical information about a file in the memory pool
File name.
Pages from mapped files.
Pages created in the cache.
Pages found in the cache.
Pages not found in the cache.
Pages read in.
Page size.
Pages written out.
A class representing configuration parameters for a
's locking subsystem.
If non-null, the deadlock detector will be run whenever a lock
conflict occurs, and lock requests will be rejected according to the
specified policy.
As transactions acquire locks on behalf of a single locker ID,
rejecting a lock request associated with a transaction normally
requires that the transaction be aborted.
The locking conflicts matrix.
If Conflicts is never set, a standard conflicts array is used; see
Standard Lock Modes in the Programmer's Reference Guide for more
information.
The Conflicts parameter is an nmodes-by-nmodes array. A non-zero
value for the array element indicates that requested_mode and
held_mode conflict:
conflicts[requested_mode][held_mode]
The not-granted mode must be represented by 0.
If the database environment already exists when
is called, the value of
Conflicts will be ignored.
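The indexing convention can be illustrated with a minimal three-mode matrix. This is a hedged sketch of the concept only; the actual standard conflicts array has more modes and is documented under Standard Lock Modes in the Programmer's Reference Guide.

```python
# Illustrative 3-mode conflicts array using the convention above:
# mode 0 is the required not-granted mode; 1 and 2 stand in for
# hypothetical read and write modes.
NG, READ, WRITE = 0, 1, 2

conflicts = [
    # held:  NG  READ  WRITE
    [0, 0, 0],  # requested NG never conflicts
    [0, 0, 1],  # requested READ conflicts with held WRITE
    [0, 1, 1],  # requested WRITE conflicts with held READ or WRITE
]

def conflicts_with(requested_mode, held_mode):
    # A non-zero entry at conflicts[requested_mode][held_mode]
    # means the request must block or be rejected.
    return bool(conflicts[requested_mode][held_mode])
```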
The maximum number of simultaneous locking entities supported by the
Berkeley DB environment
This value is used by to
estimate how much space to allocate for various lock-table data
structures. The default value is 1000 lockers. For specific
information on configuring the size of the lock subsystem, see
Configuring locking: sizing the system in the Programmer's Reference
Guide.
If the database environment already exists when
is called, the value of
MaxLockers will be ignored.
The maximum number of locks supported by the Berkeley DB
environment.
This value is used by to
estimate how much space to allocate for various lock-table data
structures. The default value is 1000 locks. For specific
information on configuring the size of the lock subsystem, see
Configuring locking: sizing the system in the Programmer's Reference
Guide.
If the database environment already exists when
is called, the value of
MaxLocks will be ignored.
The maximum number of locked objects supported by the Berkeley DB
environment.
This value is used by to
estimate how much space to allocate for various lock-table data
structures. The default value is 1000 objects. For specific
information on configuring the size of the lock subsystem, see
Configuring locking: sizing the system in the Programmer's Reference
Guide.
If the database environment already exists when
is called, the value of
MaxObjects will be ignored.
The number of lock table partitions in the Berkeley DB environment.
The default value is 10 times the number of CPUs on the system if
there is more than one CPU. Increasing the number of partitions can
provide for greater throughput on a system with multiple CPUs and
more than one thread contending for the lock manager. On
single-processor systems, more than one partition may increase the
overhead
of the lock manager. Systems often report threading contexts as
CPUs. If your system does this, set the number of partitions to 1 to
get optimal performance.
If the database environment already exists when
is called, the value of
Partitions will be ignored.
A class representing configuration parameters for
Policy for duplicate data items in the database; that is, whether
insertion of a key/data pair whose key already exists in the
database will be successful.
The ordering of duplicates in the database for
is determined by the order
of insertion, unless the ordering is otherwise specified by use of a
cursor operation or a duplicate sort function. The ordering of
duplicates in the database for
is determined by the
duplicate comparison function. If the application does not specify a
comparison function using
, a default lexical
comparison will be used.
is preferred to
for performance reasons.
should only be used by
applications wanting to order duplicate data items manually.
If the database already exists, the value of Duplicates must be the
same as the existing database or an error will be returned.
It is an error to specify and
anything other than .
Turn reverse splitting in the Btree on or off.
As pages are emptied in a database, the Berkeley DB Btree
implementation attempts to coalesce empty pages into higher-level
pages in order to keep the database as small as possible and
minimize search time. This can hurt performance in applications with
cyclical data demands; that is, applications where the database
grows and shrinks repeatedly. For example, because Berkeley DB does
page-level locking, the maximum level of concurrency in a database
of two pages is far smaller than that in a database of 100 pages, so
a database that has shrunk to a minimal size can cause severe
deadlocking when a new cycle of data insertion begins.
If true, support retrieval from the Btree using record numbers.
Logical record numbers in Btree databases are mutable in the face of
record insertion or deletion. See
for further discussion.
Maintaining record counts within a Btree introduces a serious point
of contention, namely the page locations where the record counts are
stored. In addition, the entire database must be locked during both
insertions and deletions, effectively single-threading the database
for those operations. Specifying UseRecordNumbers can result in
serious performance degradation for some applications and data sets.
It is an error to specify and
anything other than .
If the database already exists, the value of UseRecordNumbers must
be the same as the existing database or an error will be returned.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
The Btree key comparison function.
The comparison function is called whenever it is necessary to
compare a key specified by the application with a key currently
stored in the tree.
If no comparison function is specified, the keys are compared
lexically, with shorter keys collating before longer keys.
If the database already exists, the comparison function must be the
same as that historically used to create the database or corruption
can occur.
The Btree prefix function.
The prefix function is used to determine the amount by which keys
stored on the Btree internal pages can be safely truncated without
losing their uniqueness. See the Btree prefix comparison section of
the Berkeley DB Reference Guide for more details about how this
works. The usefulness of this is data-dependent, but can produce
significantly reduced tree sizes and search times in some data sets.
If no prefix function or key comparison function is specified by the
application, a default lexical comparison function is used as the
prefix function. If no prefix function is specified and
is specified, no prefix function is
used. It is an error to specify a prefix function without also
specifying .
If the database already exists, the prefix function must be the
same as that historically used to create the database or corruption
can occur.
The duplicate data item comparison function.
The comparison function is called whenever it is necessary to
compare a data item specified by the application with a data item
currently stored in the database. Setting DuplicateCompare implies
setting to
.
If no comparison function is specified, the data items are compared
lexically, with shorter data items collating before longer data
items.
If the database already exists when
is called, the
delegate must be the same as that historically used to create the
database or corruption can occur.
Enable compression of the key/data pairs stored in the database,
using the default compression and decompression functions.
The default functions perform prefix compression on keys, and prefix
compression on data items for duplicate keys.
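The idea behind prefix compression on sorted keys can be sketched as follows; this illustrates the general technique only and is not Berkeley DB's actual on-page encoding.

```python
# Minimal sketch of prefix compression: each key in a sorted run is
# stored as (length of prefix shared with the previous key, suffix).
def compress(keys):
    out, prev = [], b""
    for k in keys:
        n = 0
        while n < min(len(prev), len(k)) and prev[n] == k[n]:
            n += 1
        out.append((n, k[n:]))
        prev = k
    return out

def decompress(entries):
    keys, prev = [], b""
    for n, suffix in entries:
        k = prev[:n] + suffix
        keys.append(k)
        prev = k
    return keys
```

Because B-tree pages hold keys in sorted order, adjacent keys often share long prefixes, which is why this scheme pays off for both keys and the data items of duplicate sets.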
Enable compression of the key/data pairs stored in the database,
using the specified compression and decompression functions.
The compression function
The decompression function
Create a new BTreeDatabaseConfig object
The compression function used to store key/data pairs in the
database.
The decompression function used to retrieve key/data pairs from the
database.
The minimum number of key/data pairs intended to be stored on any
single Btree leaf page.
This value is used to determine if key or data items will be stored
on overflow pages instead of Btree leaf pages. For more information
on the specific algorithm used, see the Berkeley DB Reference Guide.
The value specified must be at least 2; if not explicitly set, a
value of 2 is used.
If the database already exists, MinKeysPerPage will be ignored.
Configuration properties for a Sequence
The policy for how to handle sequence creation.
If the sequence does not already exist and
is set, the Sequence constructor
will fail.
If true, the object returned by the Sequence constructor will be
free-threaded; that is, usable by multiple threads within a single
address space. Note that if multiple threads create multiple
sequences using the same , that
database must have also been opened free-threaded.
An open database which holds the persistent data for the sequence.
The database may be of any type, but must not have been configured
to support duplicate data items.
If was opened in a transaction,
calling Get may result in changes to the sequence object; these
changes will be automatically committed in a transaction internal to
the Berkeley DB library. If the thread of control calling Get has an
active transaction, which holds locks on the same database as the
one in which the sequence object is stored, it is possible for a
thread of control calling Get to self-deadlock because the active
transaction's locks conflict with the internal transaction's locks.
For this reason, it is often preferable for sequence objects to be
stored in their own database.
The record in the database that stores the persistent sequence data.
If true, the sequence should wrap around when it is incremented
(decremented) past the specified maximum (minimum) value.
Set the minimum and maximum values in the sequence.
The maximum value in the sequence.
The minimum value in the sequence.
The initial value for a sequence.
If true, the sequence will be decremented.
If true, the sequence will be incremented. This is the default.
The number of elements cached by a sequence handle.
The minimum value in the sequence.
The maximum value in the sequence.
A class representing configuration parameters for
Policy for duplicate data items in the database; that is, whether
insertion of a key/data pair whose key already exists in the
database will be successful.
The ordering of duplicates in the database for
is determined by the order
of insertion, unless the ordering is otherwise specified by use of a
cursor operation or a duplicate sort function. The ordering of
duplicates in the database for
is determined by the
duplicate comparison function. If the application does not specify a
comparison function using
, a default lexical
comparison will be used.
is preferred to
for performance reasons.
should only be used by
applications wanting to order duplicate data items manually.
If the database already exists, the value of Duplicates must be the
same as the existing database or an error will be returned.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
The Secondary Hash key comparison function.
The comparison function is called whenever it is necessary to
compare a key specified by the application with a key currently
stored in the tree.
If no comparison function is specified, the keys are compared
lexically, with shorter keys collating before longer keys.
If the database already exists, the comparison function must be the
same as that historically used to create the database or corruption
can occur.
A user-defined hash function; if no hash function is specified, a
default hash function is used.
Because no hash function performs equally well on all possible data,
the user may find that the built-in hash function performs poorly
with a particular data set.
If the database already exists, HashFunction must be the same as
that historically used to create the database or corruption can
occur.
The duplicate data item comparison function.
The comparison function is called whenever it is necessary to
compare a data item specified by the application with a data item
currently stored in the database. Setting DuplicateCompare implies
setting to
.
If no comparison function is specified, the data items are compared
lexically, with shorter data items collating before longer data
items.
If the database already exists when
is called, the delegate
must be the same as that historically used to create the database or
corruption can occur.
Instantiate a new SecondaryHashDatabaseConfig object
The desired density within the hash table. If no value is specified,
the fill factor will be selected dynamically as pages are filled.
The density is an approximation of the number of keys allowed to
accumulate in any one bucket, determining when the hash table grows
or shrinks. If you know the average sizes of the keys and data in
your data set, setting the fill factor can enhance performance. A
reasonable rule for computing the fill factor is to set it to the following:
(pagesize - 32) / (average_key_size + average_data_size + 8)
If the database already exists, this setting will be ignored.
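Applying the rule above with hypothetical sizes (a 4096-byte page, 12-byte average keys, 28-byte average data items, all example values):

```python
# Fill factor per the rule above: (pagesize - 32) / (key + data + 8).
# The sizes passed in below are hypothetical example values.
def fill_factor(pagesize, avg_key_size, avg_data_size):
    return (pagesize - 32) // (avg_key_size + avg_data_size + 8)

print(fill_factor(4096, 12, 28))  # (4096 - 32) // 48 = 84
```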
An estimate of the final size of the hash table.
In order for the estimate to be used when creating the database,
must also be set. If the estimate or fill
factor are not set or are set too low, hash tables will still expand
gracefully as keys are entered, although a slight performance
degradation may be noticed.
If the database already exists, this setting will be ignored.
Statistical information about the memory pool subsystem
Total cache size and number of regions
Maximum number of regions.
Maximum file size for mmap.
Maximum number of open fd's.
Maximum buffers to write.
Sleep after writing max buffers.
Total number of pages.
Pages from mapped files.
Pages found in the cache.
Pages not found in the cache.
Pages created in the cache.
Pages read in.
Pages written out.
Clean pages forced from the cache.
Dirty pages forced from the cache.
Pages written by memp_trickle.
Clean pages.
Dirty pages.
Number of hash buckets.
Assumed page size.
Total hash chain searches.
Longest hash chain searched.
Total hash entries searched.
Hash lock granted with nowait.
Hash lock granted after wait.
Max hash lock granted with nowait.
Max hash lock granted after wait.
Region lock granted with nowait.
Region lock granted after wait.
Buffers frozen.
Buffers thawed.
Frozen buffers freed.
Number of page allocations.
Buckets checked during allocation.
Max checked during allocation.
Pages checked during allocation.
Max checked during allocation.
Thread waited on buffer I/O.
Number of times sync interrupted.
Region size.
Stats for files open in the memory pool
Statistical information about a BTreeDatabase
Duplicate pages.
Bytes free in duplicate pages.
Empty pages.
Pages on the free list.
Internal pages.
Bytes free in internal pages.
Leaf pages.
Bytes free in leaf pages.
Tree levels.
Magic number.
Metadata flags.
Minkey value.
Number of data items.
Number of unique keys.
Page count.
Overflow pages.
Bytes free in overflow pages.
Page size.
Version number.
A class for traversing the records of a
Create a new cursor that uses the same transaction and locker ID as
the original cursor.
This is useful when an application is using locking and requires two
or more cursors in the same thread of control.
If true, the newly created cursor is initialized to refer to the
same position in the database as the original cursor (if any) and
hold the same locks (if any). If false, or the original cursor does
not hold a database position and locks, the created cursor is
uninitialized and will behave like a cursor newly created by
.
A newly created cursor
Insert the data element as a duplicate element of the key to which
the cursor refers.
The data element to insert
Specify whether to insert the data item immediately before or
immediately after the cursor's current position.
A class representing a SecondaryQueueDatabase. The Queue format supports
fast access to fixed-length records accessed sequentially or by logical
record number.
Instantiate a new SecondaryQueueDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new SecondaryQueueDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that
the transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
The size of the extents used to hold pages in a
, specified as a number of pages.
The length of records in the database.
The padding character for short, fixed-length records.
A class representing a key or data item in a Berkeley DB database
Create a new, empty DatabaseEntry object.
Create a new DatabaseEntry object, with the specified data
The new object's
Release the resources held by the underlying C library.
The byte string stored in or retrieved from a database
The AckPolicy class specifies how master and client sites will handle
acknowledgment of replication messages, which are necessary for
"permanent" records. The current implementation requires that all
sites in a replication group configure the same acknowledgment policy.
The master should wait until all replication clients have
acknowledged each permanent replication message.
The master should wait until all electable peers have acknowledged
each permanent replication message (where "electable peer" means a
client capable of being subsequently elected master of the
replication group).
The master should not wait for any client replication message
acknowledgments.
The master should wait until at least one client site has
acknowledged each permanent replication message.
The master should wait until at least one electable peer has
acknowledged each permanent replication message (where "electable
peer" means a client capable of being subsequently elected master of
the replication group).
The master should wait until it has received acknowledgments from
the minimum number of electable peers sufficient to ensure that the
effect of the permanent record remains durable if an election is
held (where "electable peer" means a client capable of being
subsequently elected master of the replication group). This is the
default acknowledgement policy.
A class representing configuration parameters for a
.
The degree of isolation for this transaction
If true and a lock is unavailable for any Berkeley DB operation
performed in the context of a transaction, cause the operation to
throw a
(or if configured with
).
This setting overrides the behavior specified by
.
If true, this transaction will execute with snapshot isolation.
For databases with set, data
values will be read as they are when the transaction begins, without
taking read locks. Silently ignored for operations on databases with
not set on the underlying
database (read locks are acquired).
A will be thrown from update
operations if a snapshot transaction attempts to update data which
was modified after the snapshot transaction read it.
Log sync behavior on transaction commit or prepare.
This setting overrides the behavior specified by
and
.
Instantiate a new TransactionConfig object
The timeout value for locks for the transaction.
Timeouts are checked whenever a thread of control blocks on a lock
or when deadlock detection is performed. This timeout is for any
single lock request. As timeouts are only checked when the lock
request first blocks or when deadlock detection is performed, the
accuracy of the timeout depends on how often deadlock detection is
performed.
Timeout values may be specified for the database environment as a
whole. See for
more information.
The transaction's name. The name is returned by
and displayed by
.
If the database environment has been configured for logging and the
Berkeley DB library was built in Debug mode (or with DIAGNOSTIC
defined), a debugging log record is written including the
transaction ID and the name.
The timeout value for locks for the transaction.
Timeouts are checked whenever a thread of control blocks on a lock
or when deadlock detection is performed. This timeout is for the
life of the transaction. As timeouts are only checked when the lock
request first blocks or when deadlock detection is performed, the
accuracy of the timeout depends on how often deadlock detection is
performed.
Timeout values may be specified for the database environment as a
whole. See for
more information.
Specifies the log flushing behavior on transaction commit
Use Berkeley DB's default behavior of syncing the log on commit.
Berkeley DB will not write or synchronously flush the log on
transaction commit or prepare.
This means the transaction will exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability);
that is, database integrity will be maintained but it is
possible that this transaction may be undone during recovery.
Berkeley DB will write, but will not synchronously flush, the
log on transaction commit or prepare.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability);
that is, database integrity will be maintained, but if the
system fails, it is possible some number of the most recently
committed transactions may be undone during recovery. The number
of transactions at risk is governed by how often the system
flushes dirty buffers to disk and how often the log is
checkpointed.
For consistent behavior across the environment, all
objects opened in the
environment must either set WRITE_NOSYNC or specify the
DB_TXN_WRITE_NOSYNC flag in the DB_CONFIG configuration file.
Berkeley DB will synchronously flush the log on transaction
commit or prepare.
This means the transaction will exhibit all of the ACID
(atomicity, consistency, isolation, and durability) properties.
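The durability trade-off between these three settings can be sketched by tracking how far a commit record travels before a crash. The helper and stage names below are hypothetical; they model the behavior described above, not the library's internals.

```python
def surviving_commits(policy, commits, crash):
    # the furthest stage a commit record reaches at commit time:
    # NOSYNC leaves it in the process buffer, WRITE_NOSYNC writes it to
    # the OS cache, SYNC flushes it all the way to disk
    stage = {"NOSYNC": "process", "WRITE_NOSYNC": "os", "SYNC": "disk"}[policy]
    # a process crash loses only the process buffer; a system crash
    # also loses anything still in the OS cache
    lost = {"process": {"process"}, "system": {"process", "os"}}[crash]
    return [txn for txn in commits if stage not in lost]
```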
A class representing the return value of
.
The result of processing an incoming replication message.
The log sequence number of the permanent log message that could not
be written to disk if is
. The largest log
sequence number of the permanent records that are now written to
disk as a result of processing the message, if
is
. In all other cases the
value is undefined.
The result of processing an incoming replication message.
The replication group has more than one master.
The application should reconfigure itself as a client by calling
,
and then call for an election using
.
An unspecified error occurred.
An election is needed.
The application should call for an election using
.
A message cannot be processed.
This is an indication that a message is irrelevant to the
current replication state (for example, an old message from a
previous generation arrives and is processed late).
Processing a message resulted in the processing of records that
are permanent.
is the maximum LSN of the permanent
records stored.
A new master has been chosen but the client is unable to
synchronize with the new master.
Possibly because the client has been configured with
to turn off
automatic internal initialization.
The system received contact information from a new environment.
The rec parameter to
contains the
opaque data specified in the cdata parameter to
. The
application should take whatever action is needed to establish a
communication channel with this new environment.
A message carrying a DB_REP_PERMANENT flag was processed
successfully, but was not written to disk.
is the LSN of this record. The application
should take whatever action is deemed necessary to retain its
recoverability characteristics.
Processing a message succeeded.
A class to represent configuration settings for
and
.
Return the database key marking the end of the compaction operation
in a Btree or Recno database. This is generally the first key of the
page where the operation stopped.
If non-null, the starting point for compaction. Compaction will
start at the smallest key greater than or equal to
. If null, compaction will start at the
beginning of the database.
If non-null, the stopping point for compaction. Compaction will stop
at the page with the smallest key greater than
. If null, compaction will stop at the end of
the database.
If true, return pages to the filesystem when possible. If false,
pages emptied as a result of compaction will be placed on the free
list for re-use, but never returned to the filesystem.
Note that only pages at the end of a file can be returned to the
filesystem. Because of the one-pass nature of the compaction
algorithm, any unemptied page near the end of the file inhibits
returning pages to the file system. A repeated call to
or
with a low
may be used to return pages in this
case.
Create a new CompactConfig object
If non-zero, this provides the goal for filling pages, specified as
a percentage between 1 and 100. Any page not at or above this
percentage full will be considered for compaction. The default
behavior is to consider every page for compaction, regardless of its
page fill percentage.
If non-zero, compaction will complete after the specified number of
pages have been freed.
If non-zero, and no is specified, this
parameter identifies the lock timeout used for implicit
transactions, in microseconds.
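As a sketch of how the fill-percentage goal selects pages for compaction, the hypothetical helper below returns the indices of pages whose fill percentage falls below the goal, and considers every page when the goal is zero:

```python
def pages_to_compact(fill_percents, fill_goal):
    # fill_percents: per-page fill percentage (0-100)
    # a goal of 0 means every page is considered for compaction
    if fill_goal == 0:
        return list(range(len(fill_percents)))
    # any page not at or above the goal is a compaction candidate
    return [i for i, fill in enumerate(fill_percents) if fill < fill_goal]
```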
A class representing the address of a replication site used by Berkeley
DB HA.
The site's host identification string, generally a TCP/IP host name.
The port number on which the site is receiving.
Instantiate a new, empty address
Instantiate a new address, parsing the host and port from the given
string
A string in host:port format
Instantiate a new address
The site's host identification string
The port number on which the site is receiving.
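Parsing the host:port form accepted by the string constructor might look like the following sketch (the `parse_address` name is hypothetical; splitting at the last colon keeps any colons inside the host identification string with the host):

```python
def parse_address(addr):
    # split "host:port" at the last colon and convert the port to an integer
    host, _, port = addr.rpartition(":")
    return host, int(port)
```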
Statistical information about a QueueDatabase
Data pages.
Bytes free in data pages.
First not deleted record.
Magic number.
Metadata flags.
Next available record number.
Number of data items.
Number of unique keys.
Page size.
Pages per extent.
Fixed-length record length.
Fixed-length record pad.
Version number.
A class for representing compact operation statistics
If no parameter was specified, the
number of deadlocks which occurred.
The number of levels removed from the Btree or Recno database during
the compaction phase.
The number of database pages reviewed during the compaction phase.
The number of database pages freed during the compaction phase.
The number of database pages returned to the filesystem.
The database key marking the end of the compaction operation. This
is generally the first key of the page where the operation stopped
and is only non-null if was
true.
A class to represent the database byte order.
The host byte order of the machine where the Berkeley DB library was
compiled.
Little endian byte order
Big endian byte order
Convert from the integer constant used to represent byte order in
the C library to its corresponding ByteOrder object.
The C library constant
The ByteOrder object corresponding to the given constant
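The C library represents byte order with the integer constants 1234 (little-endian) and 4321 (big-endian); the constant 0 is assumed here to select the host machine's byte order. A minimal sketch of the conversion:

```python
import sys

def byte_order_name(constant):
    # 1234 and 4321 are the constants the C library uses for little-
    # and big-endian; 0 (assumed) selects the byte order of the host
    if constant == 0:
        return sys.byteorder  # "little" or "big"
    return {1234: "little", 4321: "big"}[constant]
```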
A class representing a BTreeDatabase. The Btree format is a
representation of a sorted, balanced tree structure.
Instantiate a new BTreeDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new BTreeDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new BTreeDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that
the transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new BTreeDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open. Also note that
the transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Compact the database, and optionally return unused database pages to
the underlying filesystem.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected using multiple
transactions. These transactions will be periodically committed to
avoid locking large sections of the tree. Any deadlocks encountered
cause the compaction operation to be retried from the point of the
last transaction commit.
Compact configuration parameters
Compact operation statistics
Compact the database, and optionally return unused database pages to
the underlying filesystem.
If is non-null, then the operation is
performed using that transaction. In this event, large sections of
the tree may be locked during the course of the transaction.
If is null, but the operation occurs in a
transactional database, the operation will be implicitly transaction
protected using multiple transactions. These transactions will be
periodically committed to avoid locking large sections of the tree.
Any deadlocks encountered cause the compaction operation to be
retried from the point of the last transaction commit.
Compact configuration parameters
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Compact operation statistics
Create a database cursor.
A newly created cursor
Create a database cursor with the given configuration.
The configuration properties for the cursor.
A newly created cursor
Create a transactionally protected database cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Create a transactionally protected database cursor with the given
configuration.
The configuration properties for the cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Return the database statistical information which does not require
traversal of the database.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
The statistical information is described by the
, ,
, and classes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
The database statistical information which does not require
traversal of the database.
Retrieve a specific numbered key/data pair from the database.
The record number of the record to be retrieved.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Retrieve a specific numbered key/data pair from the database.
The record number of the record to be retrieved.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Retrieve a specific numbered key/data pair from the database.
The record number of the record to be retrieved.
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The locking behavior to use.
A whose Key
parameter is and whose Value parameter is the
retrieved data.
Return an estimate of the proportion of keys that are less than,
equal to, and greater than the specified key.
The key to search for
An estimate of the proportion of keys that are less than, equal to,
and greater than the specified key.
Return an estimate of the proportion of keys that are less than,
equal to, and greater than the specified key.
The key to search for
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
An estimate of the proportion of keys that are less than, equal to,
and greater than the specified key.
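Over a sorted key set, the estimate described above can be modeled with binary search. This sketch uses a hypothetical `key_range` helper returning the three proportions, which sum to one:

```python
import bisect

def key_range(sorted_keys, key):
    # proportions of keys less than, equal to, and greater than `key`
    lo = bisect.bisect_left(sorted_keys, key)
    hi = bisect.bisect_right(sorted_keys, key)
    n = len(sorted_keys)
    return lo / n, (hi - lo) / n, (n - hi) / n
```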
Store the key/data pair in the database only if it does not already
appear in the database.
The key to store in the database
The data item to store in the database
Store the key/data pair in the database only if it does not already
appear in the database.
The key to store in the database
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
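The semantics of storing a pair only when the key is absent can be sketched against a plain dictionary (the `put_no_overwrite` name is hypothetical; the real method throws rather than returning a flag, so the boolean here is purely illustrative):

```python
def put_no_overwrite(db, key, value):
    # store only if the key is not already present;
    # report whether the pair was actually stored
    if key in db:
        return False
    db[key] = value
    return True
```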
Return the database statistical information for this database.
Database statistical information.
Return the database statistical information for this database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Database statistical information.
Return the database statistical information for this database.
The statistical information is described by
.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
Database statistical information.
Return pages to the filesystem that are already free and at the end
of the file.
The number of database pages returned to the filesystem
Return pages to the filesystem that are already free and at the end
of the file.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The number of database pages returned to the filesystem
The Btree key comparison function. The comparison function is called
whenever it is necessary to compare a key specified by the
application with a key currently stored in the tree.
The compression function used to store key/data pairs in the
database.
The decompression function used to retrieve key/data pairs from the
database.
The duplicate data item comparison function.
Whether the insertion of duplicate data items in the database is
permitted, and whether duplicate items are sorted.
The minimum number of key/data pairs intended to be stored on any
single Btree leaf page.
The Btree prefix function. The prefix function is used to determine
the amount by which keys stored on the Btree internal pages can be
safely truncated without losing their uniqueness.
If true, this object supports retrieval from the Btree using record
numbers.
If false, empty pages will not be coalesced into higher-level pages.
A class representing configuration parameters for
Cause the logical record numbers to be mutable, and change as
records are added to and deleted from the database.
Using or to
create new records will cause the creation of multiple records if
the record number is more than one greater than the largest record
currently in the database. For example, creating record 28, when
record 25 was previously the last record in the database, will
create records 26 and 27 as well as 28. Attempts to retrieve records
that were created in this manner will throw a
.
If a created record is not at the end of the database, all records
following the new record will be automatically renumbered upward by
one. For example, the creation of a new record numbered 8 causes
records numbered 8 and greater to be renumbered upward by one. If a
cursor was positioned to record number 8 or greater before the
insertion, it will be shifted upward one logical record, continuing
to refer to the same record as it did before.
If a deleted record is not at the end of the database, all records
following the removed record will be automatically renumbered
downward by one. For example, deleting the record numbered 8 causes
records numbered 9 and greater to be renumbered downward by one. If
a cursor was positioned to record number 9 or greater before the
removal, it will be shifted downward one logical record, continuing
to refer to the same record as it did before.
If a record is deleted, all cursors that were positioned on that
record prior to the removal will no longer be positioned on a valid
entry. This includes cursors used to delete an item. For example, if
a cursor was positioned to record number 8 before the removal of
that record, subsequent calls to
will return false until the cursor is moved to another record. A
call to will return the new record
numbered 8 - which is the record that was numbered 9 prior to the
delete (if such a record existed).
For these reasons, concurrent access to a
with this setting specified may be
largely meaningless, although it is supported.
If the database already exists, this setting must be the same as the
existing database or an exception will be thrown.
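The renumbering behavior described above can be simulated with an ordered list in which record numbers are 1-based positions. The class below is an illustrative model of those semantics, not the access method's implementation; the placeholder records stand in for the intervening records whose retrieval would fail.

```python
class RenumberedRecno:
    # records are kept in order; record numbers are 1-based positions
    def __init__(self):
        self.records = []

    def insert(self, recno, data):
        # creating a record past the end also creates the records in
        # between (retrieving those placeholders would fail)
        while len(self.records) < recno - 1:
            self.records.append(None)
        # inserting before existing records renumbers them upward by one
        self.records.insert(recno - 1, data)

    def delete(self, recno):
        # deleting renumbers all following records downward by one
        del self.records[recno - 1]

    def get(self, recno):
        return self.records[recno - 1]
```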
If true, any file will be read in its
entirety when is called. If false,
may be read lazily.
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
A function to call after the record number has been selected but
before the data has been stored into the database.
When using , it may be useful to
modify the stored data based on the generated key. If a delegate is
specified, it will be called after the record number has been
selected, but before the data has been stored.
The underlying source file for the Recno access method.
The purpose of the source file is to provide fast access and
modification to databases that are normally stored as flat text
files.
The source parameter specifies an underlying flat text database file
that is read to initialize a transient record number index. In the
case of variable length records, the records are separated, as
specified by . For example, standard UNIX
byte stream files can be interpreted as a sequence of variable
length records separated by newline characters.
In addition, when cached data would normally be written back to the
underlying database file (for example,
or
), the in-memory copy of the
database will be written back to the source file.
By default, the backing source file is read lazily; that is, records
are not read from the file until they are requested by the
application. If multiple processes (not threads) are accessing a
Recno database concurrently, and are either inserting or deleting
records, the backing source file must be read in its entirety before
more than a single process accesses the database, and only that
process should specify the backing source file as part of the
call. See
for more information.
Reading and writing the backing source file specified by source
cannot be transaction-protected because it involves filesystem
operations that are not part of the Db transaction methodology. For
this reason, if a temporary database is used to hold the records, it
is possible to lose the contents of the source file, for example, if
the system crashes at the right instant. If a file is used to hold
the database, normal database recovery on that file can be used to
prevent information loss, although it is still possible that the
contents of source will be lost if the system crashes.
The source file must already exist (but may be zero-length) when
is called.
It is not an error to specify a read-only source file when creating
a database, nor is it an error to modify the resulting database.
However, any attempt to write the changes to the backing source file
using either the or
methods will fail, of course.
Use to stop it from
attempting to write the changes to the backing file; instead, they
will be silently discarded.
For all of the previous reasons, the source file is generally used
to specify databases that are read-only for Berkeley DB
applications; and that are either generated on the fly by software
tools or modified using a different mechanism — for example, a text
editor.
If the database already exists, BackingFile must be the same as that
historically used to create the database or corruption can occur.
Instantiate a new RecnoDatabaseConfig object
The delimiting byte used to mark the end of a record in
.
This byte is used for variable length records if
is set. If is
specified and no delimiting byte was specified, newline characters
(that is, ASCII 0x0a) are interpreted as end-of-record markers.
If the database already exists, this setting will be ignored.
Specify that the records are fixed-length, not byte-delimited, and
are of length Length.
Any records added to the database that are less than Length bytes
long are automatically padded (see for more
information).
Any attempt to insert records into the database that are greater
than Length bytes long will cause the call to fail immediately and
return an error.
If the database already exists, this setting will be ignored.
The padding character for short, fixed-length records.
If no pad character is specified, space characters (that is, ASCII
0x20) are used for padding.
If the database already exists, this setting will be ignored.
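The two record formats described here, byte-delimited and fixed-length, can be sketched as follows. The helper names are hypothetical; the defaults follow the newline (0x0a) and space (0x20) defaults noted above.

```python
def split_records(data, delimiter=b"\n"):
    # variable-length records: split the flat byte stream on the
    # delimiting byte (newline when none is specified)
    return data.split(delimiter)

def fix_length(record, length, pad=b" "):
    # fixed-length records: short records are padded with the pad
    # character (space by default); longer records are an error
    if len(record) > length:
        raise ValueError("record longer than the fixed record length")
    return record + pad * (length - len(record))
```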
Statistical information about a HashDatabase
Number of big key/data pages.
Bytes free on big item pages.
Bytes free on bucket pages.
Number of duplicate pages.
Bytes free on duplicate pages.
Fill factor specified at create.
Pages on the free list.
Metadata flags.
Magic number.
Number of data items.
Number of hash buckets.
Number of unique keys.
Number of overflow pages.
Bytes free on overflow pages.
Page count.
Page size.
Version number.
A class representing configuration parameters for
The isolation degree the cursor should use.
ensures the stability of the
current data item read by this cursor but permits data read by this
cursor to be modified or deleted prior to the commit of the
transaction for this cursor.
allows read operations performed
by the cursor to return modified but not yet committed data.
Silently ignored if the
was not specified when the underlying database was opened.
If true, specify that the cursor will be used to update the
database. The underlying database environment must have been opened
with set.
Configure a transactional cursor to operate with read-only snapshot
isolation. For databases with
set, data values will be read as they are when the cursor is opened,
without taking read locks.
This setting implicitly begins a transaction that is committed when
the cursor is closed.
This setting is silently ignored if
is not set on the underlying
database or if a transaction is supplied to
The cache priority for pages referenced by the cursor.
The priority of a page biases the replacement algorithm to be more
or less likely to discard a page when space is needed in the buffer
pool. The bias is temporary, and pages will eventually be discarded
if they are not referenced again. The setting is only advisory, and
does not guarantee pages will be treated in a specific way.
Instantiate a new CursorConfig object
A class representing configuration parameters for
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
Instantiate a new SecondaryQueueDatabaseConfig object
Specify the length of records in the database.
The record length must be sufficiently smaller than
that at least one record, plus the database
page's metadata information, can fit on each database
page.
Any records added to the database that are less than Length bytes
long are automatically padded (see for more
information).
Any attempt to insert records into the database that are greater
than Length bytes long will cause the call to fail immediately and
return an error.
If the database already exists, this setting will be ignored.
The padding character for short, fixed-length records.
If no pad character is specified, space characters (that is, ASCII
0x20) are used for padding.
If the database already exists, this setting will be ignored.
The size of the extents used to hold pages in a
, specified as a number of
pages.
Each extent is created as a separate physical file. If no extent
size is set, the default behavior is to create only a single
underlying database file.
For information on tuning the extent size, see Selecting an extent
size in the Programmer's Reference Guide.
If the database already exists, this setting will be ignored.
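Since each extent holds a fixed number of pages and lives in its own physical file, the mapping from page numbers to extent files can be sketched as integer division (a hypothetical helper, assuming pages are numbered from zero):

```python
def extent_for_page(pageno, extent_size):
    # with an extent size of N pages, page P lives in extent file P // N
    return pageno // extent_size
```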
A class representing a SecondaryHashDatabase. The Hash format is an
extensible, dynamic hashing scheme.
Instantiate a new SecondaryHashDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new SecondaryHashDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object
itself be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new SecondaryHashDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new SecondaryHashDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
The secondary Hash key comparison function. The comparison function
is called whenever it is necessary to compare a key specified by the
application with a key currently stored in the tree.
The duplicate data item comparison function.
Whether the insertion of duplicate data items in the database is
permitted, and whether duplicate items are sorted.
The desired density within the hash table.
A user-defined hash function; if no hash function is specified, a
default hash function is used.
An estimate of the final size of the hash table.
Statistical information about the Replication Manager
Existing connections dropped.
# msgs discarded due to excessive queue length.
Failed new connection attempts.
# of insufficiently ack'ed msgs.
# msgs queued for network delay.
A class representing configuration parameters for a
's replication subsystem.
Instantiate a new ReplicationConfig object with default
configuration values.
If true, the replication master will send groups of records to the
clients in a single network transfer
If true, the client will delay synchronizing to a newly declared
master (defaults to false). Clients configured in this way will
remain unsynchronized until the application calls
.
If true, master leases will be used for this site (defaults to
false).
Configuring this option may result in a
when attempting to read entries
from a database after the site's master lease has expired.
If true, the replication master will not automatically re-initialize
outdated clients (defaults to false).
If true, Berkeley DB method calls that would normally block while
clients are in recovery will return errors immediately (defaults to
false).
If true, the Replication Manager will observe the strict "majority"
rule in managing elections, even in a group with only 2 sites. This
means the client in a 2-site group will be unable to take over as
master if the original master fails or becomes disconnected. (See
the Elections section in the Berkeley DB Reference Guide for more
information.) Both sites in the replication group should have the
same value for this parameter.
Set the clock skew ratio among replication group members based on
the fastest and slowest measurements among the group for use with
master leases.
Calling this method is optional; the default values for clock skew
assume no skew. The user must also configure leases via
. Additionally, the user must
set the master lease timeout via and
the number of sites in the replication group via
. These settings may be configured in any
order. For a description of the clock skew values, see Clock skew
in the Berkeley DB Programmer's Reference Guide. For a description
of master leases, see Master leases in the Berkeley DB Programmer's
Reference Guide.
These arguments can be used to express either raw measurements of a
clock timing experiment or a percentage across machines. For
instance, if a group of sites has a 2% variance, then
should be set to 102, and
should be set to 100. Or, for a 0.03%
difference, you can use 10003 and 10000 respectively.
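The percentage-to-ratio conversion above can be sketched as follows. This is an illustrative helper, not part of the Berkeley DB API; the function name and the `scale` parameter are invented for the example.

```python
# Illustrative sketch (not the Berkeley DB API): deriving the integer
# fast/slow clock-skew arguments from a measured percentage variance.

def skew_arguments(variance_percent, scale=100):
    """Return (fast, slow) integers expressing a clock variance.

    A 2% variance with scale=100 yields (102, 100); a 0.03% variance
    needs scale=10000 to be representable, yielding (10003, 10000).
    """
    fast = round(scale * (1 + variance_percent / 100))
    return fast, scale

print(skew_arguments(2))            # (102, 100)
print(skew_arguments(0.03, 10000))  # (10003, 10000)
```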
The value, relative to , of the fastest clock
in the group of sites.
The value of the slowest clock in the group of sites.
Set a threshold for the minimum and maximum time that a client waits
before requesting retransmission of a missing message.
If the client detects a gap in the sequence of incoming log records
or database pages, Berkeley DB will wait for at least
microseconds before requesting retransmission
of the missing record. Berkeley DB will double that amount before
requesting the same missing record again, and so on, up to a
maximum threshold of microseconds.
These values are thresholds only. Since Berkeley DB has no thread
available in the library as a timer, the threshold is only checked
when a thread enters the Berkeley DB library to process an incoming
replication message. Any amount of time may have passed since the
last message arrived and Berkeley DB only checks whether the amount
of time since a request was made is beyond the threshold value or
not.
By default the minimum is 40000 and the maximum is 1280000 (1.28
seconds). These defaults are fairly arbitrary and the application
likely needs to adjust these. The values should be based on expected
load and performance characteristics of the master and client host
platforms and transport infrastructure as well as round-trip message
time.
The minimum number of microseconds a client waits before requesting
retransmission.
The maximum number of microseconds a client waits before requesting
retransmission.
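The doubling behavior described above, from the minimum threshold up to the maximum, can be sketched like this. This is an illustration of the backoff sequence only, not the Berkeley DB implementation.

```python
# Illustrative sketch (not the Berkeley DB API): the doubling sequence of
# retransmission-request wait thresholds, from the minimum up to the
# maximum, in microseconds.

def request_thresholds(minimum, maximum):
    """Yield successive wait thresholds: the minimum, doubled on each
    repeated request for the same missing record, capped at the maximum."""
    wait = minimum
    while True:
        yield wait
        if wait >= maximum:
            return
        wait = min(wait * 2, maximum)

# With the defaults (40000 us minimum, 1280000 us maximum):
print(list(request_thresholds(40000, 1280000)))
# [40000, 80000, 160000, 320000, 640000, 1280000]
```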
Set a byte-count limit on the amount of data that will be
transmitted from a site in response to a single message processed by
. The limit is
not a hard limit, and the record that exceeds the limit is the last
record to be sent.
Record transmission throttling is turned on by default with a limit
of 10MB.
If both and are
zero, then the transmission limit is turned off.
The number of gigabytes which, when added to
, specifies the maximum number of bytes that
will be sent in a single call to
.
The number of bytes which, when added to
, specifies the maximum number of bytes
that will be sent in a single call to
.
The delegate used to transmit data using the replication
application's communication infrastructure.
Specify how master and client sites will handle acknowledgment of
replication messages which are necessary for "permanent" records.
The current implementation requires that all sites in a replication
group configure the same acknowledgment policy.
The host information for the local system.
Add a new replication site to the replication manager's list of
known sites. It is not necessary for all sites in a replication
group to know about all other sites in the group.
Currently, the replication manager framework only supports a single
client peer, and the last specified peer is used.
The remote site's address
If true, configure client-to-client synchronization with the
specified remote site.
The amount of time the replication manager's transport function
waits to collect enough acknowledgments from replication group
clients, before giving up and returning a failure indication. The
default wait time is 1 second.
The amount of time a master site will delay between completing a
checkpoint and writing a checkpoint record into the log.
This delay allows clients to complete their own checkpoints before
the master requires completion of them. The default is 30 seconds.
If all databases in the environment, and the environment's
transaction log, are configured to reside in memory (never preserved
to disk), then, although checkpoints are still necessary, the delay
is not useful and should be set to 0.
The amount of time the replication manager will wait before trying
to re-establish a connection to another site after a communication
failure. The default wait time is 30 seconds.
The timeout period for an election. The default timeout is 2
seconds.
Configure the amount of time the replication manager will wait
before retrying a failed election. The default wait time is 10
seconds.
An optional configuration timeout period to wait for full election
participation the first time the replication group finds a master.
By default this option is turned off and normal election timeouts
are used. (See the Elections section in the Berkeley DB Reference
Guide for more information.)
The amount of time the replication manager, running at a client
site, waits for some message activity on the connection from the
master (heartbeats or other messages) before concluding that the
connection has been lost. When 0 (the default), no monitoring is
performed.
The frequency at which the replication manager, running at a master
site, broadcasts a heartbeat message in an otherwise idle system.
When 0 (the default), no heartbeat messages will be sent.
The amount of time a client grants its master lease to a master.
When using master leases all sites in a replication group must use
the same lease timeout value. There is no default value. If leases
are desired, this method must be called prior to calling
or
.
The value, relative to , of the fastest
clock in the group of sites.
The value of the slowest clock in the group of sites.
The total number of sites in the replication group.
This setting is typically used by applications which use the
Berkeley DB library "replication manager" support. (However, see
also , the
description of the nsites parameter.)
The database environment's priority in replication group elections.
A special value of 0 indicates that this environment cannot be a
replication group master. If not configured, then a default value
of 100 is used.
The minimum number of microseconds a client waits before requesting
retransmission.
The maximum number of microseconds a client waits before requesting
retransmission.
The gigabytes component of the byte-count limit on the amount of
data that will be transmitted from a site in response to a single
message processed by
.
The bytes component of the byte-count limit on the amount of data
that will be transmitted from a site in response to a single
message processed by
.
A class representing configuration parameters for a
's mutex subsystem.
The mutex alignment, in bytes.
It is sometimes advantageous to align mutexes on specific byte
boundaries in order to minimize cache line collisions. Alignment
specifies an alignment for mutexes allocated by Berkeley DB.
If the database environment already exists when
is called, the value of
Alignment will be ignored.
Configure the number of additional mutexes to allocate.
If both Increment and are set, the value of
Increment will be silently ignored.
If the database environment already exists when
is called, the value of
Increment will be ignored.
The total number of mutexes to allocate.
Berkeley DB allocates a default number of mutexes based on the
initial configuration of the database environment. That default
calculation may be too small if the application has an unusual need
for mutexes (for example, if the application opens an unexpectedly
large number of databases) or too large (if the application is
trying to minimize its memory footprint). MaxMutexes is used to
specify an absolute number of mutexes to allocate.
If both and MaxMutexes are set, the value of
Increment will be silently ignored.
If the database environment already exists when
is called, the value of
MaxMutexes will be ignored.
The number of spins test-and-set mutexes should execute before
blocking.
A function to call after the record number has been selected but before
the data has been stored into the database.
The data to be stored.
The generated record number.
A function to store a compressed key/data pair into a supplied buffer.
The key immediately preceding the application supplied key.
The data associated with prevKey.
The application supplied key.
The application supplied data.
The compressed data to be stored in the
database.
The number of compressed bytes written to
, or the required size of
, if too small.
True on success, false if dest is too small to contain the
compressed data. All other errors should throw an exception.
A function to decompress a key/data pair from a supplied buffer.
The key immediately preceding the key being decompressed.
The data associated with prevKey.
The data stored in the tree, that is, the compressed data.
The number of bytes read from .
Two new DatabaseEntry objects representing the decompressed
key/data pair.
The application-specified feedback function called to report Berkeley DB
operation progress.
An operation code specifying the Berkeley DB operation
The percent of the operation that has been completed, specified as an
integer value between 0 and 100.
An application-specified comparison function.
The application supplied key.
The current tree's key.
An integer value less than, equal to, or greater than zero if the first
key parameter is considered to be respectively less than, equal to, or
greater than the second key parameter.
The application-specified feedback function called to report Berkeley DB
operation progress.
An operation code specifying the Berkeley DB operation
The percent of the operation that has been completed, specified as an
integer value between 0 and 100.
The application-specified error reporting function.
The prefix string
The error message string
The application's event notification function.
An event code specifying the Berkeley DB event
Additional information describing an event. By default, event_info is
null; specific events may pass non-null values, in which case the event
will also describe the information's structure.
The application-specified hash function.
A byte string representing a key in the database
The hashed value of
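The general shape of such a user-defined hash function, mapping a key's bytes to an unsigned 32-bit value, can be sketched with the well-known FNV-1a scheme. This is an illustration only, not the Berkeley DB default hash function.

```python
# Illustrative sketch (not the Berkeley DB API): a user-defined hash
# function mapping a key's bytes to an unsigned 32-bit value, using the
# well-known FNV-1a scheme.

def fnv1a_hash(key: bytes) -> int:
    h = 0x811C9DC5  # FNV-1a 32-bit offset basis
    for byte in key:
        h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF  # FNV prime, mod 2^32
    return h

print(hex(fnv1a_hash(b"")))  # the offset basis itself: 0x811c9dc5
```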
The function used to transmit data using the replication application's
communication infrastructure.
The first of the two data elements to be transmitted by the send
function.
The second of the two data elements to be transmitted by the send
function.
If the type of message to be sent has an LSN associated with it, then
this is the LSN of the record being sent. This LSN can be used to
determine that certain records have been processed successfully by
clients.
A positive integer identifier that specifies the replication environment
to which the message should be sent.
The special identifier DB_EID_BROADCAST indicates that a message should
be broadcast to every environment in the replication group. The
application may use a true broadcast protocol or may send the message
in sequence to each machine with which it is in communication. In both
cases, the sending site should not be asked to process the message.
The special identifier DB_EID_INVALID indicates an invalid environment
ID. This may be used to initialize values that are subsequently checked
for validity.
XXX: TBD
0 on success and non-zero on failure
The function that creates the set of secondary keys corresponding to a
given primary key and data pair.
The primary key
The primary data item
The secondary key(s)
A function which returns a unique identifier pair for a thread of
control in a Berkeley DB application.
A DbThreadID object describing the current thread of control
A function which returns an identifier pair for a thread of control
formatted for display.
The thread of control to format
The formatted identifier pair
A function which returns whether the thread of control, identified by
, is still running.
The thread of control to check
If true, return only if the process is alive, and the
portion of
should be ignored.
True if the thread is alive, false otherwise.
A class representing configuration parameters for
Create a new object, with default settings
Configuration for the locking subsystem
Configuration for the logging subsystem
Configuration for the memory pool subsystem
Configuration for the mutex subsystem
Configuration for the replication subsystem
The mechanism for reporting detailed error messages to the
application.
When an error occurs in the Berkeley DB library, a
, or subclass of DatabaseException,
is thrown. In some cases, however, the exception may be insufficient
to completely describe the cause of the error, especially during
initial application debugging.
In some cases, when an error occurs, Berkeley DB will call the given
delegate with additional error information. It is up to the delegate
to display the error message in an appropriate manner.
Setting ErrorFeedback to null unconfigures the callback interface.
This error-logging enhancement does not slow performance or
significantly increase application size, and may be run during
normal operation as well as during application debugging.
Monitor progress within long running operations.
Some operations performed by the Berkeley DB library can take
non-trivial amounts of time. The Feedback delegate can be used by
applications to monitor progress within these operations. When an
operation is likely to take a long time, Berkeley DB will call the
specified delegate with progress information.
It is up to the delegate to display this information in an
appropriate manner.
A delegate which is called to notify the process of specific
Berkeley DB events.
A delegate that returns a unique identifier pair for the current
thread of control.
This delegate supports .
For more information, see Architecting Data Store and Concurrent
Data Store applications, and Architecting Transactional Data Store
applications, both in the Berkeley DB Programmer's Reference Guide.
A delegate that formats a process ID and thread ID identifier pair.
A delegate that returns if a thread of control (either a true thread
or a process) is still running.
Paths of directories to be used as the location of the access method
database files.
Paths specified to will be searched
relative to this path. Paths set using this method are additive, and
specifying more than one will result in each specified directory
being searched for database files.
If no database directories are specified, database files must be
named either by absolute paths or relative to the environment home
directory. See Berkeley DB File Naming in the Programmer's Reference
Guide for more information.
The path of a directory to be used as the location to create the
access method database files. When ,
, or
is used to create a file it will be
created relative to this path.
This path must also exist in .
If no database directory is specified, database files must be named
either by absolute paths or relative to the environment home
directory. See Berkeley DB File Naming in the Programmer's Reference
Guide for more information.
Set the password and algorithm used by the Berkeley DB library to
perform encryption and decryption.
The password used to perform encryption and decryption.
The algorithm used to perform encryption and decryption.
The prefix string that appears before error messages issued by
Berkeley DB.
For databases opened inside of a DatabaseEnvironment, setting
ErrorPrefix affects the entire environment and is equivalent to
setting .
The permissions for any intermediate directories created by Berkeley
DB.
By default, Berkeley DB does not create intermediate directories
needed for recovery, that is, if the file /a/b/c/mydatabase is being
recovered, and the directory path b/c does not exist, recovery will
fail. This default behavior is because Berkeley DB does not know
what permissions are appropriate for intermediate directory
creation, and creating the directory might result in a security
problem.
Directory permissions are interpreted as a string of nine
characters, using the character set r (read), w (write), x (execute
or search), and - (none). The first character is the read
permissions for the directory owner (set to either r or -). The
second character is the write permissions for the directory owner
(set to either w or -). The third character is the execute
permissions for the directory owner (set to either x or -).
Similarly, the second set of three characters are the read, write
and execute/search permissions for the directory group, and the
third set of three characters are the read, write and execute/search
permissions for all others. For example, the string rwx------ would
configure read, write and execute/search access for the owner only.
The string rwxrwx--- would configure read, write and execute/search
access for both the owner and the group. The string rwxr----- would
configure read, write and execute/search access for the directory
owner and read-only access for the directory group.
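The interpretation of the nine-character string can be sketched as follows. This is a conceptual parser for illustration, not the Berkeley DB API.

```python
# Illustrative sketch (not the Berkeley DB API): interpreting the
# nine-character directory-permission string described above.

def parse_permissions(perms):
    """Split an 'rwxr-x---' style string into owner/group/other triples."""
    if len(perms) != 9 or any(c not in "rwx-" for c in perms):
        raise ValueError("expected nine characters from r, w, x, -")
    classes = ("owner", "group", "other")
    return {
        who: {
            "read": triple[0] == "r",
            "write": triple[1] == "w",
            "execute": triple[2] == "x",
        }
        for who, triple in zip(classes, (perms[0:3], perms[3:6], perms[6:9]))
    }

# rwxr----- : full access for the owner, read-only for the group.
info = parse_permissions("rwxr-----")
print(info["owner"]["write"], info["group"]["read"], info["other"]["read"])
# True True False
```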
The path of a directory to be used as the location of temporary
files.
The files created to back in-memory access method databases will be
created relative to this path. These temporary files can be quite
large, depending on the size of the database.
If no directories are specified, the following alternatives are
checked in the specified order. The first existing directory path is
used for all temporary files.
- The value of the environment variable TMPDIR.
- The value of the environment variable TEMP.
- The value of the environment variable TMP.
- The value of the environment variable TempFolder.
- The value returned by the GetTempPath interface.
- The directory /var/tmp.
- The directory /usr/tmp.
- The directory /temp.
- The directory /tmp.
- The directory C:/temp.
- The directory C:/tmp.
Environment variables are only checked if
is true.
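The search order above can be sketched as a simple first-match scan. This is an illustration, not the Berkeley DB implementation; the environment and existence check are injected as parameters so the sketch stays self-contained, and the GetTempPath step (which falls between the environment variables and the fixed directories) is omitted.

```python
# Illustrative sketch (not the Berkeley DB API): the temporary-directory
# search order described above, returning the first existing candidate.

def find_temp_dir(environ, path_exists, use_environment_vars):
    candidates = []
    if use_environment_vars:  # env vars consulted only when permitted
        for var in ("TMPDIR", "TEMP", "TMP", "TempFolder"):
            if var in environ:
                candidates.append(environ[var])
    # (The GetTempPath result is consulted here in the real search.)
    candidates += ["/var/tmp", "/usr/tmp", "/temp", "/tmp",
                   "C:/temp", "C:/tmp"]
    for path in candidates:
        if path_exists(path):
            return path
    return None

# With environment variables disabled, the fixed directories are tried:
print(find_temp_dir({"TMPDIR": "/scratch"}, lambda p: p == "/tmp", False))
# /tmp
```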
Specific additional informational and debugging messages in the
Berkeley DB message output.
If true, database operations for which no explicit transaction
handle was specified, and which modify databases in the database
environment, will be automatically enclosed within a transaction.
If true, Berkeley DB Concurrent Data Store applications will perform
locking on an environment-wide basis rather than on a per-database
basis.
If true, Berkeley DB will flush database writes to the backing disk
before returning from the write system call, rather than flushing
database writes explicitly in a separate system call, as necessary.
This is only available on some systems (for example, systems
supporting the IEEE/ANSI Std 1003.1 (POSIX) standard O_DSYNC flag,
or systems supporting the Windows FILE_FLAG_WRITE_THROUGH flag).
This flag may result in inaccurate file modification times and other
file-level information for Berkeley DB database files. This flag
will almost certainly result in a performance decrease on most
systems. This flag is only applicable to certain filesystems (for
example, the Veritas VxFS filesystem), where the filesystem's
support for trickling writes back to stable storage behaves badly
(or more likely, has been misconfigured).
If true, Berkeley DB will page-fault shared regions into memory when
initially creating or joining a Berkeley DB environment. In
addition, Berkeley DB will write the shared regions when creating an
environment, forcing the underlying virtual memory and filesystems
to instantiate both the necessary memory and the necessary disk
space. This can also avoid out-of-disk space failures later on.
In some applications, the expense of page-faulting the underlying
shared memory regions can affect performance. (For example, if the
page-fault occurs while holding a lock, other lock requests can
convoy, and overall throughput may decrease.)
If true, turn off system buffering of Berkeley DB database files to
avoid double caching.
If true, Berkeley DB will grant all requested mutual exclusion
mutexes and database locks without regard for their actual
availability. This functionality should never be used for purposes
other than debugging.
If true, Berkeley DB will copy read-only database files into the
local cache instead of potentially mapping them into process memory
(see for further information).
If true, Berkeley DB will ignore any panic state in the database
environment. (Database environments in a panic state normally refuse
all attempts to call Berkeley DB functions, throwing
.) This functionality should never
be used for purposes other than debugging.
If true, overwrite files stored in encrypted formats before deleting
them.
Berkeley DB overwrites files using alternating 0xff, 0x00 and 0xff
byte patterns. For file overwriting to be effective, the underlying
file must be stored on a fixed-block filesystem. Systems with
journaling or logging filesystems will require operating system
support and probably modification of the Berkeley DB sources.
If true, database calls timing out based on lock or transaction
timeout values will throw
instead of . This allows applications
to distinguish between operations which have deadlocked and
operations which have exceeded their time limits.
If true, Berkeley DB will not write or synchronously flush the log
on transaction commit.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability); that
is, database integrity will be maintained, but if the application or
system fails, it is possible some number of the most recently
committed transactions may be undone during recovery. The number of
transactions at risk is governed by how many log updates can fit
into the log buffer, how often the operating system flushes dirty
buffers to disk, and how often the log is checkpointed.
If true and a lock is unavailable for any Berkeley DB operation
performed in the context of a transaction, cause the operation to
throw (or
if
is set).
If true, all transactions in the environment will be started as if
were passed to
, and all
non-transactional cursors will be opened as if
were passed to
.
If true, Berkeley DB will write, but will not synchronously flush,
the log on transaction commit.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability); that
is, database integrity will be maintained, but if the system fails,
it is possible some number of the most recently committed
transactions may be undone during recovery. The number of
transactions at risk is governed by how often the system flushes
dirty buffers to disk and how often the log is checkpointed.
If true, all databases in the environment will be opened as if
is passed to
. This flag will be ignored for queue
databases for which MVCC is not supported.
If true, Berkeley DB will yield the processor immediately after each
page or mutex acquisition. This functionality should never be used
for purposes other than stress testing.
If true, Berkeley DB subsystems will create any underlying files, as
necessary.
If true, the created object will
be free-threaded; that is, concurrently usable by multiple threads
in the address space.
Required to be true if the created
object will be concurrently used by more than one thread in the
process, or if any objects opened in the
scope of the object will be
concurrently used by more than one thread in the process.
Required to be true when using the Replication Manager.
If true, lock shared Berkeley DB environment files and memory-mapped
databases into memory.
If true, allocate region memory from the heap instead of from memory
backed by the filesystem or system shared memory.
This setting implies the environment will only be accessed by a
single process (although that process may be multithreaded). This
flag has two effects on the Berkeley DB environment. First, all
underlying data structures are allocated from per-process memory
instead of from shared memory that is accessible to more than a
single process. Second, mutexes are only configured to work between
threads.
This setting should be false if more than a single process is
accessing the environment because it is likely to cause database
corruption and unpredictable behavior. For example, if both a server
application and Berkeley DB utilities (for example, db_archive,
db_checkpoint or db_stat) are expected to access the environment,
this setting should be false.
If true, check to see if recovery needs to be performed before
opening the database environment. (For this check to be accurate,
all processes using the environment must specify it when opening the
environment.)
If recovery needs to be performed for any reason (including the
initial use of this setting), and is also
specified, recovery will be performed and the open will proceed
normally. If recovery needs to be performed and
is not specified,
will be thrown. If recovery does
not need to be performed, will be ignored.
See Architecting Transactional Data Store applications in the
Programmer's Reference Guide for more information.
If true, catastrophic recovery will be run on this environment
before opening it for normal use.
If true, the and must
also be set, because the regions will be removed and re-created,
and transactions are required for application recovery.
If true, normal recovery will be run on this environment before
opening it for normal use.
If true, the and must
also be set, because the regions will be removed and re-created,
and transactions are required for application recovery.
If true, allocate region memory from system shared memory instead of
from heap memory or memory backed by the filesystem.
See Shared Memory Regions in the Programmer's Reference Guide for
more information.
If true, the Berkeley DB process' environment may be permitted to
specify information to be used when naming files.
See Berkeley DB File Naming in the Programmer's Reference Guide for
more information.
Because permitting users to specify which files are used can create
security problems, environment information will be used in file
naming for all users only if UseEnvironmentVars is true.
If true, initialize locking for the Berkeley DB Concurrent Data
Store product.
In this mode, Berkeley DB provides multiple reader/single writer
access. The only other subsystem that should be specified with
UseCDB flag is .
If true, initialize the locking subsystem.
This subsystem should be used when multiple processes or threads are
going to be reading and writing a Berkeley DB database, so that they
do not interfere with each other. If all threads are accessing the
database(s) read-only, locking is unnecessary. When UseLocking is
specified, it is usually necessary to run a deadlock detector, as
well. See for more
information.
If true, initialize the logging subsystem.
This subsystem should be used when recovery from application or
system failure is necessary. If the log region is being created and
log files are already present, the log files are reviewed;
subsequent log writes are appended to the end of the log, rather
than overwriting current log entries.
If true, initialize the shared memory buffer pool subsystem.
This subsystem should be used whenever an application is using any
Berkeley DB access method.
If true, initialize the replication subsystem.
This subsystem should be used whenever an application plans on using
replication. UseReplication requires and
also be set.
If true, initialize the transaction subsystem.
This subsystem should be used when recovery and atomicity of
multiple operations are important. UseTxns implies
.
The password used to perform encryption and decryption.
The algorithm used to perform encryption and decryption.
A value, in microseconds, representing lock timeouts.
All timeouts are checked whenever a thread of control blocks on a
lock or when deadlock detection is performed. As timeouts are only
checked when the lock request first blocks or when deadlock
detection is performed, the accuracy of the timeout depends on how
often deadlock detection is performed.
Timeout values specified for the database environment may be
overridden on a per-transaction basis, see
.
The number of active transactions supported by the environment. This
value bounds the size of the memory allocated for transactions.
Child transactions are counted as active until they either commit or
abort.
Transactions that update multiversion databases are not freed until
the last page version that the transaction created is flushed from
cache. This means that applications using multi-version concurrency
control may need a transaction for each page in cache, in the
extreme case.
When all of the memory available in the database environment for
transactions is in use, calls to
will fail (until
some active transactions complete). If MaxTransactions is never set,
the database environment is configured to support at least 100
active transactions.
An approximate number of threads in the database environment.
ThreadCount must be set if
will be used. ThreadCount does not set the maximum number of threads
but is used to determine memory sizing and the thread control block
reclamation policy.
If a process has not configured , and
then attempts to join a database environment configured for failure
checking with ,
, and
ThreadCount, the program may be unable to allocate a thread control
block and fail to join the environment. This is true of the
standalone Berkeley DB utility programs. To avoid problems when
using the standalone Berkeley DB utility programs with environments
configured for failure checking, incorporate the utility's
functionality directly in the application, or call
before running the
utility.
A value, in microseconds, representing transaction timeouts.
All timeouts are checked whenever a thread of control blocks on a
lock or when deadlock detection is performed. As timeouts are only
checked when the lock request first blocks or when deadlock
detection is performed, the accuracy of the timeout depends on how
often deadlock detection is performed.
Timeout values specified for the database environment may be
overridden on a per-transaction basis, see
.
Recover to the time specified by timestamp rather than to the most
current possible date.
Once a database environment has been upgraded to a new version of
Berkeley DB involving a log format change (see Upgrading Berkeley DB
installations in the Programmer's Reference Guide), it is no longer
possible to recover to a specific time before that upgrade.
A class representing a SecondaryBTreeDatabase. The Btree format is a
representation of a sorted, balanced tree structure.
Instantiate a new SecondaryBTreeDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new SecondaryBTreeDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new SecondaryBTreeDatabase object, open the
database represented by and associate
the database with the
primary index.
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new SecondaryBTreeDatabase object, open the
database represented by and associate
the database with the
primary index.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
The Btree key comparison function. The comparison function is called
whenever it is necessary to compare a key specified by the
application with a key currently stored in the tree.
The duplicate data item comparison function.
Whether the insertion of duplicate data items in the database is
permitted, and whether duplicate items are sorted.
The minimum number of key/data pairs intended to be stored on any
single Btree leaf page.
If false, empty pages will not be coalesced into higher-level pages.
The Btree prefix function. The prefix function is used to determine
the amount by which keys stored on the Btree internal pages can be
safely truncated without losing their uniqueness.
If true, this object supports retrieval from the Btree using record
numbers.
The policy for how to handle database creation.
Never create the database.
Create the database if it does not already exist.
Do not open the database and return an error if it already exists.
Specifies the database operation whose progress is being reported
The underlying database is being upgraded.
The underlying database is being verified.
Policy for duplicate data items in the database; that is, whether insertion
when the key of the key/data pair being inserted already exists in the
database will be successful.
Insertion when the key of the key/data pair being inserted already
exists in the database will fail.
Duplicates are allowed and maintained in sorted order, as determined by the
duplicate comparison function.
Duplicates are allowed and ordered in the database by the order of
insertion, unless the ordering is otherwise specified by use of a cursor
operation or a duplicate sort function.
Specifies an algorithm used for encryption and decryption
The default algorithm, or the algorithm previously used in an
existing environment
The Rijndael/AES algorithm
Also known as the Advanced Encryption Standard and Federal
Information Processing Standard (FIPS) 197
Specifies the environment operation whose progress is being reported
The environment is being recovered.
Specifies the action to take when deleting a foreign key
Abort the deletion.
Delete records that refer to the foreign key
Nullify records that refer to the foreign key
Specify the degree of isolation for transactional operations
Read operations on the database may request the return of modified
but not yet committed data.
Provide for cursor stability but not repeatable reads. Data items
which have been previously read by a transaction may be deleted or
modified by other transactions before the original transaction
completes.
For the life of the transaction, every time a thread of control
reads a data item, it will be unchanged from its previous value
(assuming, of course, the thread of control does not itself modify
the item). This is Berkeley DB's default degree of isolation.
Specify a Berkeley DB event
The database environment has failed.
All threads of control in the database environment should exit the
environment, and recovery should be run.
The local site is now a replication client.
The local replication site has just won an election.
An application using the Base replication API should arrange for a
call to
after
receiving this event, to reconfigure the local environment as a
replication master.
Replication Manager applications may safely ignore this event. The
Replication Manager calls
automatically on behalf of the application when appropriate
(resulting in firing of the event).
The local site is now the master site of its replication group. It
is the application's responsibility to begin acting as the master
environment.
The replication group of which this site is a member has just
established a new master; the local site is not the new master. The
event_info parameter to the
stores an integer containing the environment ID of the new master.
The replication manager did not receive enough acknowledgements
(based on the acknowledgement policy configured with
) to ensure a
transaction's durability within the replication group. The
transaction will be flushed to the master's local disk storage for
durability.
This event is provided only to applications configured for the
replication manager.
The client has completed startup synchronization and is now
processing live log records received from the master.
A Berkeley DB write to stable storage failed.
A class to represent what lock request(s) should be rejected during
deadlock resolution.
If no DeadlockPolicy has yet been specified, use
.
Reject lock requests which have timed out. No other deadlock
detection is performed.
Reject the lock request for the locker ID with the most locks.
Reject the lock request for the locker ID with the most write locks.
Reject the lock request for the locker ID with the fewest locks.
Reject the lock request for the locker ID with the fewest write
locks.
Reject the lock request for the locker ID with the oldest lock.
Reject the lock request for a random locker ID.
Reject the lock request for the locker ID with the youngest lock.
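The policies above can be pictured as different victim-selection rules over the set of lockers involved in a deadlock. The following is an illustrative sketch, not the Berkeley DB implementation; the locker records, field names, and policy strings are hypothetical.

```python
# Illustrative sketch of deadlock victim selection. Each locker is a
# dict with hypothetical fields: "locks" (total locks held),
# "write_locks" (write locks held), and "acquired_at" (time the locker's
# oldest lock was acquired; smaller means older).

def pick_victim(lockers, policy):
    """Return the locker whose lock request should be rejected."""
    if policy == "MAX_LOCKS":
        return max(lockers, key=lambda l: l["locks"])
    if policy == "MAX_WRITE":
        return max(lockers, key=lambda l: l["write_locks"])
    if policy == "MIN_LOCKS":
        return min(lockers, key=lambda l: l["locks"])
    if policy == "MIN_WRITE":
        return min(lockers, key=lambda l: l["write_locks"])
    if policy == "OLDEST":
        # oldest lock: earliest acquisition time
        return min(lockers, key=lambda l: l["acquired_at"])
    if policy == "YOUNGEST":
        # youngest lock: latest acquisition time
        return max(lockers, key=lambda l: l["acquired_at"])
    raise ValueError("unknown policy: " + policy)
```

The Expire policy is different in kind: it rejects only requests that have already timed out, so it is not a selection among live lockers and is omitted from the sketch.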
A class to represent information about the Berkeley DB cache
The number of gigabytes in the cache
The number of bytes in the cache
The number of caches
Create a new CacheInfo object. The size of the cache is set to
gbytes gigabytes plus bytes and spread over numCaches separate
caches.
The number of gigabytes in the cache
The number of bytes in the cache
The number of caches
A class that provides an arbitrary number of persistent objects that
return an increasing or decreasing sequence of integers.
Instantiate a new Sequence object.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
Configuration parameters for the Sequence
Instantiate a new Sequence object.
If is null and the operation occurs in a
transactional database, the operation will be implicitly transaction
protected.
Configuration parameters for the Sequence
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Close the sequence handle. Any unused cached values are lost.
Return the next available element in the sequence and change the
sequence value by .
If there are enough cached values in the sequence handle then they
will be returned. Otherwise the next value will be fetched from the
database and incremented (decremented) by enough to cover the delta
and the next batch of cached values.
For maximum concurrency a non-zero cache size should be specified
prior to opening the sequence handle and
should be specified for each Get method call.
By default, sequence ranges do not wrap; to cause the sequence to
wrap around the beginning or end of its range, set
to true.
If was opened in a transaction,
calling Get may result in changes to the sequence object; these
changes will be automatically committed in a transaction internal to
the Berkeley DB library. If the thread of control calling Get has an
active transaction, which holds locks on the same database as the
one in which the sequence object is stored, it is possible for a
thread of control calling Get to self-deadlock because the active
transaction's locks conflict with the internal transaction's locks.
For this reason, it is often preferable for sequence objects to be
stored in their own database.
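The caching behaviour described above can be sketched in a few lines. This is an illustrative model only, assuming a simple integer counter stands in for the value persisted in the database; the class and field names are hypothetical, not part of the libdb_dotnet48 API.

```python
# Illustrative model of a cached, increasing sequence. When the cache is
# exhausted, a single "database" fetch reserves enough values to cover
# the requested delta plus the next batch of cached values.

class CachedSequence:
    def __init__(self, cache_size, start=0):
        self.cache_size = cache_size
        self._stored = start   # value persisted in the "database"
        self._next = start     # next value to hand out
        self._limit = start    # first value NOT covered by the cache

    def get(self, delta):
        if delta <= 0:
            raise ValueError("delta must be greater than 0")
        if self._next + delta > self._limit:
            # Cache exhausted: one fetch covers delta + a fresh batch,
            # so the next cache_size values need no database access.
            self._stored = self._next + delta + self.cache_size
            self._limit = self._stored
        value = self._next
        self._next += delta
        return value
```

With `cache_size=5`, seven calls to `get(1)` touch the "database" only twice, which is the concurrency benefit of a non-zero cache size noted above.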
The amount by which to increment the sequence value. Must be
greater than 0.
The next available element in the sequence.
Return the next available element in the sequence and change the
sequence value by .
The amount by which to increment the sequence value. Must be
greater than 0.
If true, and if the operation is implicitly transaction protected,
do not synchronously flush the log when the transaction commits.
The next available element in the sequence.
Return the next available element in the sequence and change the
sequence value by .
The amount by which to increment the sequence value. Must be
greater than 0.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Must be null if the sequence was opened with a non-zero cache size.
The next available element in the sequence.
Print diagnostic information.
Print diagnostic information.
The diagnostic information is described by
.
If true, reset statistics after printing.
Remove the sequence from the database.
Remove the sequence from the database.
If true, and if the operation is implicitly transaction protected,
do not synchronously flush the log when the transaction commits.
Remove the sequence from the database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Return statistical information for this sequence.
Statistical information for this sequence.
Return statistical information for this sequence.
In the presence of multiple threads or processes accessing an active
sequence, the information returned by DB_SEQUENCE->stat() may be
out-of-date.
The DB_SEQUENCE->stat() method cannot be transaction-protected. For
this reason, it should be called in a thread of control that has no
open cursors or active transactions.
If true, reset statistics.
Statistical information for this sequence.
Release the resources held by this object, and close the sequence if
it's still open.
The database used by the sequence.
The key for the sequence.
The current cache size.
The minimum value in the sequence.
The maximum value in the sequence.
If true, the sequence should wrap around when it is incremented
(decremented) past the specified maximum (minimum) value.
If true, the sequence will be incremented. This is the default.
If true, the sequence will be decremented.
A class representing an estimate of the proportion of keys that are less
than, equal to, and greater than a given key.
Values are in the range of 0 to 1; for example, if the field less is
0.05, 5% of the keys in the database are less than the key parameter.
The value for equal will be zero if there is no matching key, and will
be non-zero otherwise.
A value between 0 and 1, the proportion of keys less than the
specified key.
A value between 0 and 1, the proportion of keys equal to the
specified key.
A value between 0 and 1, the proportion of keys greater than the
specified key.
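The three proportions can be computed directly over a sorted key list, which may help make the estimate concrete. A minimal sketch (the function name is illustrative, not the library API):

```python
from bisect import bisect_left, bisect_right

# Illustrative sketch: the less/equal/greater proportions that a
# key-range estimate describes, computed exactly over a sorted list.
def key_range(sorted_keys, key):
    n = len(sorted_keys)
    lo = bisect_left(sorted_keys, key)   # keys strictly less than key
    hi = bisect_right(sorted_keys, key)  # lo..hi are equal to key
    return lo / n, (hi - lo) / n, (n - hi) / n
```

For 100 distinct keys and a query for the sixth-smallest, this yields less = 0.05, matching the 5% example above; the three values always sum to 1.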
Statistical information about the mutex subsystem
Mutex alignment
Available mutexes
Mutex count
Mutexes in use
Maximum mutexes ever in use
Region lock granted without wait.
Region size.
Region lock granted after wait.
Mutex test-and-set spins
A class representing configuration parameters for a
's memory pool subsystem.
The size of the shared memory buffer pool — that is, the cache.
The cache should be the size of the normal working data set of the
application, with some small amount of additional memory for unusual
situations. (Note: the working set is not the same as the number of
pages accessed simultaneously, and is usually much larger.)
The default cache size is 256KB, and may not be specified as less
than 20KB. Any cache size less than 500MB is automatically increased
by 25% to account for buffer pool overhead; cache sizes larger than
500MB are used as specified. The maximum size of a single cache is
4GB on 32-bit systems and 10TB on 64-bit systems. (All sizes are in
powers-of-two, that is, 256KB is 2^18 not 256,000.) For information
on tuning the Berkeley DB cache size, see Selecting a cache size in
the Programmer's Reference Guide.
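The sizing rule can be sketched as simple arithmetic (an illustrative reading of the text above, not library code; the minimum-size check and exact rounding of the real library may differ):

```python
# Illustrative sketch of the cache sizing rule: caches smaller than
# 500MB get a 25% allowance for buffer-pool overhead; larger caches are
# used as specified. All sizes are powers of two (1KB = 2**10).
KB, MB = 2**10, 2**20

def effective_cache_size(requested):
    if requested < 20 * KB:
        raise ValueError("cache size may not be less than 20KB")
    if requested < 500 * MB:
        return requested + requested // 4  # +25% buffer-pool overhead
    return requested
```

So the 256KB default actually reserves 320KB of region memory under this rule.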
The maximum cache size.
The specified size is rounded to the nearest multiple of the cache
region size, which is the initial cache size divided by
CacheSize.NCaches. If no value
is specified, it defaults to the initial cache size.
Limit the number of sequential write operations scheduled by the
library when flushing dirty pages from the cache.
The maximum number of sequential write operations scheduled by the
library when flushing dirty pages from the cache, or 0 if there is
no limitation on the number of sequential write operations.
The number of microseconds the thread of control should pause before
scheduling further write operations. It must be specified as an
unsigned 32-bit number of microseconds, limiting the maximum pause
to roughly 71 minutes.
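The "roughly 71 minutes" figure follows from the unsigned 32-bit microsecond limit:

```python
# 2**32 - 1 microseconds is the largest pause an unsigned 32-bit
# value can express; converting to minutes gives about 71.6.
max_pause_us = 2**32 - 1
max_pause_min = max_pause_us / 1_000_000 / 60
```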
The number of file descriptors the library will open concurrently
when flushing dirty pages from the cache.
The number of microseconds the thread of control should pause before
scheduling further write operations.
The number of sequential write operations scheduled by the library
when flushing dirty pages from the cache.
The maximum file size, in bytes, for a file to be mapped into the
process address space. If no value is specified, it defaults to
10MB.
Files that are opened read-only in the cache (and that satisfy a few
other criteria) are, by default, mapped into the process address
space instead of being copied into the local cache. This can result
in better-than-usual performance because available virtual memory is
normally much larger than the local cache, and page faults are
faster than page copying on many systems. However, it can cause
resource starvation in the presence of limited virtual memory, and
it can result in immense process sizes in the presence of large
databases.
A class representing configuration parameters for a
's logging subsystem.
If true, Berkeley DB will automatically remove log files that are no
longer needed.
Automatic log file removal is likely to make catastrophic recovery
impossible.
Replication applications will rarely want to configure automatic log
file removal as it increases the likelihood a master will be unable
to satisfy a client's request for a recent log record.
If true, Berkeley DB will flush log writes to the backing disk
before returning from the write system call, rather than flushing
log writes explicitly in a separate system call, as necessary.
This is only available on some systems (for example, systems
supporting the IEEE/ANSI Std 1003.1 (POSIX) standard O_DSYNC flag,
or systems supporting the Windows FILE_FLAG_WRITE_THROUGH flag).
This flag may result in inaccurate file modification times and other
file-level information for Berkeley DB log files. This flag may
offer a performance increase on some systems and a performance
decrease on others.
If true, maintain transaction logs in memory rather than on disk.
This means that transactions exhibit the ACI (atomicity,
consistency, and isolation) properties, but not D (durability); that
is, database integrity will be maintained, but if the application or
system fails, integrity will not persist. All database files must be
verified and/or restored from a replication group master or archival
backup after application or system failure.
When in-memory logs are configured and no more log buffer space is
available, Berkeley DB methods may throw
. When choosing log buffer and
file sizes for in-memory logs, applications should ensure the
in-memory log buffer size is large enough that no transaction will
ever span the entire buffer, and avoid a state where the in-memory
buffer is full and no space can be freed because a transaction that
started in the first log "file" is still active.
If true, turn off system buffering of Berkeley DB log files to avoid
double caching.
If true, zero all pages of a log file when that log file is created.
This has been shown to provide greater transaction throughput in some
environments. The log file will be zeroed by the thread which needs
to re-create the new log file. Other threads may not write to the
log file while this is happening.
The path of a directory to be used as the location of logging files.
Log files created by the Log Manager subsystem will be created in
this directory.
If no logging directory is specified, log files are created in the
environment home directory. See Berkeley DB File Naming in the
Programmer's Reference Guide for more information.
For the greatest degree of recoverability from system or application
failure, database files and log files should be located on separate
physical devices.
If the database environment already exists when
is called, the value of
Dir must be consistent with the existing environment or corruption
can occur.
The size of the in-memory log buffer, in bytes.
When the logging subsystem is configured for on-disk logging, the
default size of the in-memory log buffer is approximately 32KB. Log
information is stored in-memory until the storage space fills up or
transaction commit forces the information to be flushed to stable
storage. In the presence of long-running transactions or
transactions producing large amounts of data, larger buffer sizes
can increase throughput.
When the logging subsystem is configured for in-memory logging, the
default size of the in-memory log buffer is 1MB. Log information is
stored in-memory until the storage space fills up or transaction
abort or commit frees up the memory for new transactions. In the
presence of long-running transactions or transactions producing
large amounts of data, the buffer size must be sufficient to hold
all log information that can accumulate during the longest running
transaction. When choosing log buffer and file sizes for in-memory
logs, applications should ensure the in-memory log buffer size is
large enough that no transaction will ever span the entire buffer,
and avoid a state where the in-memory buffer is full and no space
can be freed because a transaction that started in the first log
"file" is still active.
If the database environment already exists when
is called, the value of
BufferSize will be ignored.
The absolute file mode for created log files.
This method is only useful for the rare Berkeley DB application that
does not control its umask value.
Normally, if Berkeley DB applications set their umask appropriately,
all processes in the application suite will have read permission on
the log files created by any process in the application suite.
However, if the Berkeley DB application is a library, a process
using the library might set its umask to a value preventing other
processes in the application suite from reading the log files it
creates. In this rare case, the DB_ENV->set_lg_filemode() method can
be used to set the mode of created log files to an absolute value.
The maximum size of a single file in the log, in bytes. Because
is an unsigned four-byte value, MaxFileSize
may not be larger than the maximum unsigned four-byte value.
When the logging subsystem is configured for on-disk logging, the
default size of a log file is 10MB.
When the logging subsystem is configured for in-memory logging, the
default size of a log file is 256KB. In addition, the
configured log buffer size must be
larger than the log file size. (The logging subsystem divides memory
configured for in-memory log records into "files", as database
environments configured for in-memory log records may exchange log
records with other members of a replication group, and those members
may be configured to store log records on-disk.) When choosing log
buffer and file sizes for in-memory logs, applications should ensure
the in-memory log buffer size is large enough that no transaction
will ever span the entire buffer, and avoid a state where the
in-memory buffer is full and no space can be freed because a
transaction that started in the first log "file" is still active.
See Log File Limits in the Programmer's Reference Guide for more
information.
If no size is specified by the application, the size last specified
for the database region will be used, or if no database region
previously existed, the default will be used.
The size of the underlying logging area of the Berkeley DB
environment, in bytes.
By default, or if the value is set to 0, the default size is
approximately 60KB. The log region is used to store filenames, and
so may need to be increased in size if a large number of files will
be opened and registered with the specified Berkeley DB
environment's log manager.
If the database environment already exists when
is called, the value of
RegionSize will be ignored.
Represents errors that occur during Berkeley DB operations.
The underlying error code from the Berkeley DB C library.
Throw an exception which corresponds to the specified Berkeley DB
error code.
The Berkeley DB error code
Create a new DatabaseException, encapsulating a specific error code.
The error code to encapsulate.
A secondary index has been corrupted. This is likely the result of an
application operating on related databases without first associating
them.
Initialize a new instance of the BadSecondaryException
Initialize a new instance of the ForeignConflictException
In-memory logs are configured and no more log buffer space is available.
Initialize a new instance of the FullLogBufferException
The requested key/data pair logically exists but was never explicitly
created by the application, or the requested key/data pair was
deleted and never re-created. In addition, the Queue access method will
throw a KeyEmptyException for records that were created as part of a
transaction that was later aborted and never re-created.
The Recno and Queue access methods will automatically create key/data
pairs under some circumstances.
Initialize a new instance of the KeyEmptyException
A key/data pair was inserted into the database using
and the key already
exists in the database, or using
or
and the key/data
pair already exists in the database.
Initialize a new instance of the KeyExistException
When multiple threads of control are modifying the database, there is
normally the potential for deadlock. In Berkeley DB, deadlock is
signified by a DeadlockException thrown from the Berkeley DB function.
Whenever a Berkeley DB function throws a DeadlockException, the
enclosing transaction should be aborted.
Initialize a new instance of the DeadlockException
The site's replication master lease has expired.
Initialize a new instance of the LeaseExpiredException
If is true,
database calls timing out based on lock or transaction timeout values
will throw a LockNotGrantedException, instead of a DeadlockException.
Initialize a new instance of the LockNotGrantedException
Initialize a new instance of the MemoryException
The requested key/data pair did not exist in the database, or
start-of- or end-of-file has been reached by a cursor.
Initialize a new instance of the NotFoundException
This version of Berkeley DB is unable to upgrade a given database.
Initialize a new instance of the OldVersionException
Berkeley DB has encountered an error it considers fatal to an entire
environment. Once a RunRecoveryException has been thrown by any
interface, it will be returned from all subsequent Berkeley DB calls
made by any threads of control participating in the environment.
An example of this type of fatal error is a corrupted database page. The
only way to recover from this type of error is to have all threads of
control exit the Berkeley DB environment, run recovery of the
environment, and re-enter Berkeley DB. (It is not strictly necessary
that the processes exit, although that is the only way to recover system
resources, such as file descriptors and memory, allocated by
Berkeley DB.)
Initialize a new instance of the RunRecoveryException
Thrown by if a database is
corrupted, and by if all
key/data pairs in the file may not have been successfully output.
Initialize a new instance of the VerificationException
The version of the Berkeley DB library doesn't match the version that
created the database environment.
Initialize a new instance of the VersionMismatchException
A class representing a unique identifier for a thread of control in a
Berkeley DB application.
The Process ID of the thread of control
The Thread ID of the thread of control
Instantiate a new DbThreadID object
The Process ID of the thread of control
The Thread ID of the thread of control
A class providing access to multiple key/data pairs.
Return an enumerator which iterates over all
pairs represented by the
.
An enumerator for the
A class representing configuration parameters for
Policy for duplicate data items in the database; that is, whether
insertion will be successful when the key of the key/data pair being
inserted already exists in the database.
The ordering of duplicates in the database for
is determined by the order
of insertion, unless the ordering is otherwise specified by use of a
cursor operation or a duplicate sort function. The ordering of
duplicates in the database for
is determined by the
duplicate comparison function. If the application does not specify a
comparison function using
, a default lexical
comparison will be used.
is preferred to
for performance reasons.
should only be used by
applications wanting to order duplicate data items manually.
If the database already exists, the value of Duplicates must be the
same as the existing database or an error will be returned.
It is an error to specify and
anything other than .
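The default lexical ordering described above (byte-by-byte unsigned comparison, with shorter items collating before longer ones) can be sketched as follows. This is a conceptual illustration in Python, not the library's actual comparator:

```python
import functools

def lexical_compare(a: bytes, b: bytes) -> int:
    """Byte-by-byte unsigned comparison; when one item is a
    prefix of the other, the shorter one collates first."""
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1
    if len(a) == len(b):
        return 0
    return -1 if len(a) < len(b) else 1

# Sorted duplicates under this default ordering:
dups = [b"bb", b"a", b"ab", b"b"]
print(sorted(dups, key=functools.cmp_to_key(lexical_compare)))
# -> [b'a', b'ab', b'b', b'bb']
```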
Turn reverse splitting in the Btree on or off.
As pages are emptied in a database, the Berkeley DB Btree
implementation attempts to coalesce empty pages into higher-level
pages in order to keep the database as small as possible and
minimize search time. This can hurt performance in applications with
cyclical data demands; that is, applications where the database
grows and shrinks repeatedly. For example, because Berkeley DB does
page-level locking, the maximum level of concurrency in a database
of two pages is far smaller than that in a database of 100 pages, so
a database that has shrunk to a minimal size can cause severe
deadlocking when a new cycle of data insertion begins.
If true, support retrieval from the Btree using record numbers.
Logical record numbers in Btree databases are mutable in the face of
record insertion or deletion. See
for further discussion.
Maintaining record counts within a Btree introduces a serious point
of contention, namely the page locations where the record counts are
stored. In addition, the entire database must be locked during both
insertions and deletions, effectively single-threading the database
for those operations. Specifying UseRecordNumbers can result in
serious performance degradation for some applications and data sets.
It is an error to specify and
anything other than .
If the database already exists, the value of UseRecordNumbers must
be the same as the existing database or an error will be returned.
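The mutability of logical record numbers can be pictured by treating a record number as a 1-based position in the sorted key order; inserting or deleting a key shifts the numbers of every later record. A minimal sketch, with hypothetical keys:

```python
import bisect

# Keys of a Btree in sorted order; the logical record number of a
# key is simply its 1-based position in this ordering.
keys = ["b", "d", "f"]                 # "d" is record 2
bisect.insort(keys, "c")               # insert a key that sorts before "d"
print(keys.index("d") + 1)             # "d" is now record 3
```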
The policy for how to handle database creation.
If the database does not already exist and
is set,
will fail.
The Btree key comparison function.
The comparison function is called whenever it is necessary to
compare a key specified by the application with a key currently
stored in the tree.
If no comparison function is specified, the keys are compared
lexically, with shorter keys collating before longer keys.
If the database already exists, the comparison function must be the
same as that historically used to create the database or corruption
can occur.
The Btree prefix function.
The prefix function is used to determine the amount by which keys
stored on the Btree internal pages can be safely truncated without
losing their uniqueness. See the Btree prefix comparison section of
the Berkeley DB Reference Guide for more details about how this
works. The usefulness of this is data-dependent, but can produce
significantly reduced tree sizes and search times in some data sets.
If no prefix function or key comparison function is specified by the
application, a default lexical comparison function is used as the
prefix function. If no prefix function is specified and
is specified, no prefix function is
used. It is an error to specify a prefix function without also
specifying .
If the database already exists, the prefix function must be the
same as that historically used to create the database or corruption
can occur.
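The idea behind a prefix function can be sketched in Python: given two adjacent keys, report how many leading bytes of the greater key are enough to keep it distinct from the lesser one, so only that prefix need be stored on an internal page. This is a conceptual sketch, not the library's interface:

```python
def prefix_len(smaller: bytes, larger: bytes) -> int:
    """Number of leading bytes of `larger` needed to distinguish it
    from `smaller`, the key sorting immediately before it."""
    n = min(len(smaller), len(larger))
    for i in range(n):
        if smaller[i] != larger[i]:
            return i + 1
    # `smaller` is a proper prefix of `larger`: one byte past the
    # common prefix is enough.
    return n + 1

# Only b"au" is needed on the internal page to separate these keys:
print(prefix_len(b"application", b"automobile"))  # -> 2
```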
The duplicate data item comparison function.
The comparison function is called whenever it is necessary to
compare a data item specified by the application with a data item
currently stored in the database. Setting DuplicateCompare implies
setting to
.
If no comparison function is specified, the data items are compared
lexically, with shorter data items collating before longer data
items.
If the database already exists when
is called, the delegate
must be the same as that historically used to create the database or
corruption can occur.
Create a new SecondaryBTreeDatabaseConfig object
The minimum number of key/data pairs intended to be stored on any
single Btree leaf page.
This value is used to determine if key or data items will be stored
on overflow pages instead of Btree leaf pages. For more information
on the specific algorithm used, see the Berkeley DB Reference Guide.
The value specified must be at least 2; if not explicitly set, a
value of 2 is used.
If the database already exists, MinKeysPerPage will be ignored.
Statistical information about the logging subsystem
Log buffer size.
Bytes to log.
Bytes to log since checkpoint.
Current log file number.
Current log file offset.
Known on disk log file number.
Known on disk log file offset.
Log file size.
Megabytes to log.
Megabytes to log since checkpoint.
Log file magic number.
Max number of commits in a flush.
Min number of commits in a flush.
Overflow writes to the log.
Log file permissions mode.
Total I/O reads from the log.
Records entered into the log.
Region lock granted without wait.
Region lock granted after wait.
Region size.
Total syncs to the log.
Total I/O writes to the log.
Log file version number.
A class representing a join cursor, for use in performing equality or
natural joins on secondary indices. For information on how to organize
your data to use this functionality, see Equality join in the
Programmer's Reference Guide.
JoinCursor does not support many of the operations offered by
and is not a subclass of .
Discard the cursor.
It is possible for the Close() method to throw a
, signaling that any enclosing
transaction should be aborted. If the application is already
intending to abort the transaction, this error should be ignored,
and the application should proceed.
After Close has been called, regardless of its result, the object
may not be used again.
Release the resources held by this object, and close the cursor if
it's still open.
Returns an enumerator that iterates through the
.
The enumerator will begin at the cursor's current position (or the
first record if the cursor has not yet been positioned) and iterate
forwards (i.e. in the direction of ) over the
remaining records.
An enumerator for the Cursor.
Iterate over the values associated with the keys to which each
passed to
was initialized. Any data value that appears in all
s is then used as a key into the
primary, and the key/data pair found in the primary is stored in
.
True if the cursor was positioned successfully, false otherwise.
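The equality-join semantics above can be sketched with plain dictionaries standing in for the primary database and its secondary indices (all names and data here are hypothetical): each secondary cursor contributes the set of primary keys matching one condition, the sets are intersected, and the survivors are looked up in the primary.

```python
# Toy primary database and two secondary indices.
primary = {"p1": "red small", "p2": "red large", "p3": "blue small"}
by_color = {"red": {"p1", "p2"}, "blue": {"p3"}}
by_size = {"small": {"p1", "p3"}, "large": {"p2"}}

def equality_join(primary, *index_hits):
    """Intersect the primary-key sets produced by each secondary
    cursor, then look the survivors up in the primary database."""
    matched = set.intersection(*map(set, index_hits))
    return {k: primary[k] for k in sorted(matched)}

print(equality_join(primary, by_color["red"], by_size["small"]))
# -> {'p1': 'red small'}
```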
Iterate over the values associated with the keys to which each
passed to
was initialized. Any data value that appears in all
s is then used as a key into the
primary, and the key/data pair found in the primary is stored in
.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Iterate over the values associated with the keys to which each
passed to
was initialized. Any data value that appears in all
s is then stored in
Current.Key.
Current.Value will contain an empty
.
True if the cursor was positioned successfully, false otherwise.
Iterate over the values associated with the keys to which each
passed to
was initialized. Any data value that appears in all
s is then stored in
Current.Key.
Current.Value will contain an empty
.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
The key/data pair at which the cursor currently points.
A class representing the supported Berkeley DB access methods.
BTree access method
Hash access method
Recno access method
Queue access method
Unknown access method
Convert this instance of DatabaseType to its string representation.
A string representation of this instance.
Statistical information about the locking subsystem
Last allocated locker ID.
Lock conflicts w/ subsequent wait.
Lock conflicts w/o subsequent wait.
Number of lock deadlocks.
Number of lock downgrades.
Number of lock modes.
Number of lock puts.
Number of lock gets.
Number of lock steals so far.
Lock timeout.
Number of lock timeouts.
Number of lock upgrades.
Locker lock granted without wait.
Locker lock granted after wait.
Current number of lockers.
Current number of locks.
Max length of bucket.
Maximum number of lock steals in any partition.
Maximum number of lockers so far.
Maximum number of lockers in table.
Maximum number of locks so far.
Maximum number of locks in any bucket.
Maximum number of locks in table.
Maximum number of object steals in any partition.
Maximum number of objects so far.
Maximum number of objects in any bucket.
Max partition lock granted without wait.
Max partition lock granted after wait.
Current maximum unused ID.
Maximum number of objects in table.
Number of partitions.
Object lock granted without wait.
Number of object steals so far.
Object lock granted after wait.
Current number of objects.
Partition lock granted without wait.
Partition lock granted after wait.
Region lock granted without wait.
Region size.
Region lock granted after wait.
Transaction timeout.
Number of transaction timeouts.
Enable specific additional informational and debugging messages.
Display additional information when doing deadlock detection.
Display additional information when performing filesystem operations
such as open, close or rename. May not be available on all
platforms.
Display additional information when performing all filesystem
operations, including read and write. May not be available on all
platforms.
Display additional information when performing recovery.
Display additional information concerning support for
Display all detailed information about replication. This includes
the information displayed by all of the other Replication* and
RepMgr* values.
Display detailed information about Replication Manager connection
failures.
Display detailed information about general Replication Manager
processing.
Display detailed information about replication elections.
Display detailed information about replication master leases.
Display detailed information about general replication processing
not covered by the other Replication* values.
Display detailed information about replication message processing.
Display detailed information about replication client
synchronization.
Display the waits-for table when doing deadlock detection.
A class representing database cursors over secondary indexes, which
allow for traversal of database records.
Protected method wrapping DBC->pget()
The secondary key
The primary key
The primary data
Flags to pass to DBC->pget
Locking parameters
Delete the key/data pair to which the cursor refers from the primary
database and all secondary indices.
The cursor position is unchanged after a delete, and subsequent
calls to cursor functions expecting the cursor to refer to an
existing key will fail.
The element has already been deleted.
Create a new cursor that uses the same transaction and locker ID as
the original cursor.
This is useful when an application is using locking and requires two
or more cursors in the same thread of control.
If true, the newly created cursor is initialized to refer to the
same position in the database as the original cursor (if any) and
hold the same locks (if any). If false, or the original cursor does
not hold a database position and locks, the created cursor is
uninitialized and will behave like a cursor newly created by
.
A newly created cursor
Returns an enumerator that iterates through the
.
The enumerator will begin at the cursor's current position (or the
first record if the cursor has not yet been positioned) and iterate
forwards (i.e. in the direction of ) over the
remaining records.
An enumerator for the SecondaryCursor.
Set the cursor to refer to the first key/data pair of the database,
and store the secondary key along with the corresponding primary
key/data pair in . If the first key has
duplicate values, the first data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the first key/data pair of the database,
and store the secondary key along with the corresponding primary
key/data pair in . If the first key has
duplicate values, the first data item in the set of duplicates is
stored in .
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to , and store the
primary key/data pair associated with the given secondary key in
. In the presence of duplicate key values, the
first data item in the set of duplicates is stored in
.
If positioning the cursor fails, will contain
an empty .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
True if the cursor was positioned successfully, false otherwise.
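The exact-match versus range-search behavior can be sketched over a sorted list of keys (hypothetical data; this is not the library's implementation): with an exact match required, a missing key is a failure, while a range search positions at the smallest key greater than or equal to the one given.

```python
import bisect

keys = [b"apple", b"cherry", b"plum"]   # sorted keys of the database

def move(keys, search, exact):
    """exact=True: succeed only on an exact key match.
    exact=False: position at the smallest key >= search."""
    i = bisect.bisect_left(keys, search)
    if i == len(keys):
        return None                      # ran off the end: failure
    if exact and keys[i] != search:
        return None
    return keys[i]

print(move(keys, b"banana", exact=True))    # -> None
print(move(keys, b"banana", exact=False))   # -> b'cherry'
```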
Set the cursor to refer to , and store the
primary key/data pair associated with the given secondary key in
. In the presence of duplicate key values, the
first data item in the set of duplicates is stored in
.
If positioning the cursor fails, will contain
an empty .
The key at which to position the cursor
If true, require the given key to match the key in the database
exactly. If false, position the cursor at the smallest key greater
than or equal to the specified key, permitting partial key matches
and range searches.
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database. The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
If positioning the cursor fails, will contain
an empty .
If this flag is specified on a database configured without sorted
duplicate support, the value of is ignored.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
True if the cursor was positioned successfully, false otherwise.
Move the cursor to the specified key/data pair of the database. The
cursor is positioned to a key/data pair if both the key and data
match the values provided on the key and data parameters.
If positioning the cursor fails, will contain
an empty .
If this flag is specified on a database configured without sorted
duplicate support, the value of is ignored.
The key/data pair at which to position the cursor.
If true, require the given key and data to match the key and data
in the database exactly. If false, position the cursor at the
smallest data value which is greater than or equal to the value
provided by (as determined by the
comparison function).
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the last key/data pair of the database,
and store the secondary key and primary key/data pair in
. If the last key has duplicate values, the
last data item in the set of duplicates is stored in
.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Set the cursor to refer to the last key/data pair of the database,
and store the secondary key and primary key/data pair in
. If the last key has duplicate values, the
last data item in the set of duplicates is stored in
.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNext is identical to
. Otherwise, move the cursor to the next
key/data pair of the database, and store the secondary key and
primary key/data pair in . In the presence of
duplicate key values, the value of Current.Key
may not change.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNext is identical to
. Otherwise, move the cursor to the next
key/data pair of the database, and store the secondary key and
primary key/data pair in . In the presence of
duplicate key values, the value of Current.Key
may not change.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store the secondary key and primary
key/data pair in . MoveNextDuplicate will
return false if the next key/data pair of the database is not a
duplicate data record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the next key/data pair of the database is a duplicate data record
for the current key/data pair, move the cursor to the next key/data
pair in the database, and store the secondary key and primary
key/data pair in . MoveNextDuplicate will
return false if the next key/data pair of the database is not a
duplicate data record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUnique is identical to
. Otherwise, move the cursor to the next
non-duplicate key in the database, and store the secondary key and
primary key/data pair in . MoveNextUnique will
return false if no non-duplicate key/data pairs exist after the
cursor position in the database.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MoveNextUnique is identical to
. Otherwise, move the cursor to the next
non-duplicate key in the database, and store the secondary key and
primary key/data pair in . MoveNextUnique will
return false if no non-duplicate key/data pairs exist after the
cursor position in the database.
If the database is a Queue or Recno database, MoveNextUnique will
ignore any keys that exist but were never explicitly created by the
application, or those that were created and later deleted.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrev is identical to
. Otherwise, move the cursor to the previous
key/data pair of the database, and store the secondary key and
primary key/data pair in . In the presence of
duplicate key values, the value of Current.Key
may not change.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrev is identical to
. Otherwise, move the cursor to
the previous key/data pair of the database, and store the secondary
key and primary key/data pair in . In the
presence of duplicate key values, the value of
Current.Key may not change.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the previous key/data pair of the database is a duplicate data
record for the current key/data pair, move the cursor to the
previous key/data pair of the database, and store the secondary key
and primary key/data pair in . MovePrevDuplicate
will return false if the previous key/data pair of the database is
not a duplicate data record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the previous key/data pair of the database is a duplicate data
record for the current key/data pair, move the cursor to the
previous key/data pair of the database, and store the secondary key
and primary key/data pair in . MovePrevDuplicate
will return false if the previous key/data pair of the database is
not a duplicate data record for the current key/data pair.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrevUnique is identical to
. Otherwise, move the cursor to the previous
non-duplicate key in the database, and store the secondary key and
primary key/data pair in . MovePrevUnique will
return false if no non-duplicate key/data pairs exist before the
cursor position in the database.
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
If the cursor is not yet initialized, MovePrevUnique is identical to
. Otherwise, move the cursor to
the previous non-duplicate key in the database, and store the
secondary key and primary key/data pair in .
MovePrevUnique will return false if no non-duplicate key/data pairs
exist before the cursor position in the database.
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
Store the secondary key and primary key/data pair to which the
cursor refers in .
If positioning the cursor fails, will contain
an empty .
True if the cursor was positioned successfully, false otherwise.
Store the secondary key and primary key/data pair to which the
cursor refers in .
If positioning the cursor fails, will contain
an empty .
The locking behavior to use.
True if the cursor was positioned successfully, false otherwise.
The secondary key and primary key/data pair at which the cursor
currently points.
Statistical information about a RecnoDatabase
Magic number.
Version number.
Metadata flags.
Number of unique keys.
Number of data items.
Page count.
Page size.
Minkey value.
Fixed-length record length.
Fixed-length record pad.
Tree levels.
Internal pages.
Leaf pages.
Duplicate pages.
Overflow pages.
Empty pages.
Pages on the free list.
Bytes free in internal pages.
Bytes free in leaf pages.
Bytes free in duplicate pages.
Bytes free in overflow pages.
A class for traversing the records of a
Create a new cursor that uses the same transaction and locker ID as
the original cursor.
This is useful when an application is using locking and requires two
or more cursors in the same thread of control.
If true, the newly created cursor is initialized to refer to the
same position in the database as the original cursor (if any) and
hold the same locks (if any). If false, or the original cursor does
not hold a database position and locks, the created cursor is
uninitialized and will behave like a cursor newly created by
.
A newly created cursor
Position the cursor at a specific key/data pair in the database, and
store the key/data pair in .
The specific numbered record of the database at which to position
the cursor.
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair in .
The specific numbered record of the database at which to position
the cursor.
The locking behavior to use
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many duplicate data items as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many duplicate data items as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many duplicate data items as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The locking behavior to use
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many duplicate data items as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The size of a buffer to fill with duplicate data items. Must be at
least the page size of the underlying database and be a multiple of
1024.
The locking behavior to use
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The locking behavior to use
True if the cursor was positioned successfully, false otherwise.
Position the cursor at a specific key/data pair in the database, and
store the key/data pair and as many ensuing key/data pairs as can
fit in a buffer the size of one database page in
.
The specific numbered record of the database at which to position
the cursor.
The size of a buffer to fill with key/data pairs. Must be at least
the page size of the underlying database and be a multiple of 1024.
The locking behavior to use
True if the cursor was positioned successfully, false otherwise.
Return the record number associated with the cursor's current
position.
The record number associated with the cursor.
Return the record number associated with the cursor's current
position.
The locking behavior to use
The record number associated with the cursor.
Insert the data element as a duplicate element of the key to which
the cursor refers.
The data element to insert
Specify whether to insert the data item immediately before or
immediately after the cursor's current position.
Insert the specified key/data pair into the database, unless a
key/data pair comparing equally to it already exists in the
database.
The key/data pair to be inserted
Thrown if a matching key/data pair already exists in the database.
Insert the specified key/data pair into the database.
The key/data pair to be inserted
If the key already exists in the database and no duplicate sort
function has been specified, specify whether the inserted data item
is added as the first or the last of the data items for that key.
A class representing a RecnoDatabase. The Recno format supports fixed-
or variable-length records, accessed sequentially or by logical record
number, and optionally backed by a flat text file.
Instantiate a new RecnoDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
A new, open database object
Instantiate a new RecnoDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is set, the operation
will be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
A new, open database object
Instantiate a new RecnoDatabase object and open the database
represented by .
If is null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Instantiate a new RecnoDatabase object and open the database
represented by and
.
If both and
are null, the database is strictly
temporary and cannot be opened by any other thread of control, thus
the database can only be accessed by sharing the single database
object that created it, in circumstances where doing so is safe. If
is null and
is non-null, the database can be
opened by other threads of control and will be replicated to client
sites in any replication group.
If is null, but
is set, the operation will
be implicitly transaction protected. Note that transactionally
protected operations on a database object require that the object itself
be transactionally protected during its open. Also note that the
transaction must be committed before the object is closed.
The name of an underlying file that will be used to back the
database. In-memory databases never intended to be preserved on disk
may be created by setting this parameter to null.
This parameter allows applications to have multiple databases in a
single file. Although no DatabaseName needs to be specified, it is
an error to attempt to open a second database in a file that was not
initially created using a database name.
The database's configuration
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
A new, open database object
Append the data item to the end of the database.
The data item to store in the database
The record number allocated to the record
Append the data item to the end of the database.
There is a minor behavioral difference between RecnoDatabase.Append
and QueueDatabase.Append. If a transaction enclosing an
Append operation aborts, the record number may be reallocated in a
subsequent RecnoDatabase.Append operation, but it will
not be reallocated in a subsequent QueueDatabase.Append
operation.
The data item to store in the database
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The record number allocated to the record
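The record-number reuse rule after an aborted append can be modeled in a few lines. This is a pure-Python sketch of the two allocation policies (Recno-style numbers may be reused after an abort, Queue-style numbers are never reused); the class names are illustrative, not part of the API:

```python
# Sketch of record-number allocation across a transaction abort.
# Recno-style: an aborted append's record number may be reallocated.
# Queue-style: record numbers are permanent, even after an abort.

class RecnoCounter:
    def __init__(self):
        self.next_recno = 1

    def append(self):
        n = self.next_recno
        self.next_recno += 1
        return n

    def abort(self, recno):
        # The number becomes available again for a later append.
        self.next_recno = recno

class QueueCounter:
    def __init__(self):
        self.next_recno = 1

    def append(self):
        n = self.next_recno
        self.next_recno += 1
        return n

    def abort(self, recno):
        # Allocation is permanent; the aborted number is simply skipped.
        pass

recno, queue = RecnoCounter(), QueueCounter()
r, q = recno.append(), queue.append()   # both allocate record 1
recno.abort(r)
queue.abort(q)
print(recno.append())  # 1 -- reallocated
print(queue.append())  # 2 -- not reallocated
```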
Compact the database, and optionally return unused database pages to
the underlying filesystem.
If the operation occurs in a transactional database, the operation
will be implicitly transaction protected using multiple
transactions. These transactions will be periodically committed to
avoid locking large sections of the tree. Any deadlocks encountered
cause the compaction operation to be retried from the point of the
last transaction commit.
Compact configuration parameters
Compact operation statistics
Compact the database, and optionally return unused database pages to
the underlying filesystem.
If is non-null, then the operation is
performed using that transaction. In this event, large sections of
the tree may be locked during the course of the transaction.
If is null, but the operation occurs in a
transactional database, the operation will be implicitly transaction
protected using multiple transactions. These transactions will be
periodically committed to avoid locking large sections of the tree.
Any deadlocks encountered cause the compaction operation to be
retried from the point of the last transaction commit.
Compact configuration parameters
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Compact operation statistics
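The implicit-transaction behavior described above (work in small chunks, commit periodically, and on deadlock retry from the last commit) can be sketched as follows. The `DeadlockError` exception and the page list are a toy model, not the real lock manager or page format:

```python
# Sketch of compacting in chunks under multiple short transactions,
# retrying from the last committed point when a deadlock occurs.

class DeadlockError(Exception):
    pass

def compact(pages, chunk_size, deadlock_at=None):
    """Compact `pages` (drop "empty" ones), committing every
    `chunk_size` pages; on deadlock, restart from the last commit."""
    committed = []           # durable result so far
    i = 0                    # index of the first uncommitted page
    while i < len(pages):
        chunk = pages[i:i + chunk_size]
        try:
            if deadlock_at is not None and i <= deadlock_at < i + chunk_size:
                deadlock_at = None       # deadlock once, then succeed
                raise DeadlockError()
            work = [p for p in chunk if p != "empty"]
        except DeadlockError:
            continue         # retry from the last transaction commit
        committed.extend(work)           # "commit" this chunk
        i += chunk_size
    return committed

pages = ["a", "empty", "b", "c", "empty", "d"]
print(compact(pages, chunk_size=2, deadlock_at=2))  # ['a', 'b', 'c', 'd']
```

Because each chunk is its own transaction, a deadlock only discards the current chunk's work, never the pages already committed.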
Create a database cursor.
A newly created cursor
Create a database cursor with the given configuration.
The configuration properties for the cursor.
A newly created cursor
Create a transactionally protected database cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Create a transactionally protected database cursor with the given
configuration.
The configuration properties for the cursor.
The transaction context in which the cursor may be used.
A newly created cursor
Return the database statistical information which does not require
traversal of the database.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The database statistical information which does not require
traversal of the database.
Return the database statistical information which does not require
traversal of the database.
Among other things, this method makes it possible for applications
to request key and record counts without incurring the performance
penalty of traversing the entire database.
The statistical information is described by the
, ,
, and classes.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
The database statistical information which does not require
traversal of the database.
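One way counts can be reported without traversing the data is to keep them current on every update, as this pure-Python sketch shows (a hypothetical store, not the real statistics implementation):

```python
# Sketch of "fast" statistics: maintain the record count on every
# put/delete so a stats call reads a cached counter in O(1) instead
# of walking the whole store.

class CountingStore:
    def __init__(self):
        self._data = {}
        self._nrecords = 0

    def put(self, key, value):
        if key not in self._data:
            self._nrecords += 1      # only new keys change the count
        self._data[key] = value

    def delete(self, key):
        if key in self._data:
            del self._data[key]
            self._nrecords -= 1

    def fast_stats(self):
        # No traversal: just return the cached counter.
        return {"nrecords": self._nrecords}

s = CountingStore()
s.put(1, "a"); s.put(2, "b"); s.put(1, "a2"); s.delete(2)
print(s.fast_stats())  # {'nrecords': 1}
```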
Return the database statistical information for this database.
Database statistical information.
Return the database statistical information for this database.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
Database statistical information.
Return the database statistical information for this database.
The statistical information is described by
.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The level of isolation for database reads.
will be silently ignored for
databases which did not specify
.
Database statistical information.
Return pages to the filesystem that are already free and at the end
of the file.
The number of database pages returned to the filesystem
Return pages to the filesystem that are already free and at the end
of the file.
If the operation is part of an application-specified transaction,
is a Transaction object returned from
; if
the operation is part of a Berkeley DB Concurrent Data Store group,
is a handle returned from
; otherwise null.
The number of database pages returned to the filesystem
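The constraint above (only pages that are both free and at the end of the file can be returned) can be sketched with a toy page list; the function name and representation are illustrative:

```python
# Sketch of returning trailing free pages to the filesystem. A free
# page in the middle of the file cannot be truncated; only the run of
# free pages at the very end of the file is returned.

def truncate_unused_pages(pages, free):
    """pages: page numbers in file order; free: set of free page
    numbers. Returns (remaining_pages, count_returned)."""
    returned = 0
    while pages and pages[-1] in free:
        free.discard(pages.pop())
        returned += 1
    return pages, returned

# Pages 2, 4 and 5 are free, but only 4 and 5 sit at the end of the
# file; page 2 stays because page 3 is still in use behind it.
remaining, n = truncate_unused_pages([0, 1, 2, 3, 4, 5], {2, 4, 5})
print(remaining, n)  # [0, 1, 2, 3] 2
```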
A function to call after the record number has been selected but
before the data has been stored into the database.
When using , it may be useful to
modify the stored data based on the generated key. If a delegate is
specified, it will be called after the record number has been
selected, but before the data has been stored.
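The callback ordering described above (record number chosen first, data stored second) is what lets the stored value embed its own record number. A pure-Python sketch with hypothetical names:

```python
# Sketch of an append callback: invoked after the record number has
# been selected but before the data is stored, so the callback can
# rewrite the data based on the generated number.

class AppendStore:
    def __init__(self, on_append=None):
        self._records = {}
        self._next = 1
        self._on_append = on_append

    def append(self, data):
        recno = self._next           # 1: select the record number
        self._next += 1
        if self._on_append is not None:
            data = self._on_append(recno, data)   # 2: let caller adjust
        self._records[recno] = data  # 3: store the (possibly new) data
        return recno

store = AppendStore(on_append=lambda recno, d: f"{recno}:{d}")
store.append("alpha")
store.append("beta")
print(store._records)  # {1: '1:alpha', 2: '2:beta'}
```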
The delimiting byte used to mark the end of a record in
.
If using fixed-length, not byte-delimited records, the length of the
records.
The padding character for short, fixed-length records.
If true, the logical record numbers are mutable, and change as
records are added to and deleted from the database.
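The difference between mutable and stable record numbers shows up on delete. A minimal sketch of the two behaviors (illustrative only):

```python
# Sketch of mutable vs stable record numbering. With renumbering,
# deleting record 2 shifts record 3 down to number 2; without it,
# record numbers are stable and the deleted number leaves a gap.

def delete_renumber(records, recno):
    # records is a list; index recno-1 holds record `recno`
    del records[recno - 1]
    return {i + 1: r for i, r in enumerate(records)}

def delete_stable(records, recno):
    # records is a dict keyed by record number; the number is retired
    records = dict(records)
    del records[recno]
    return records

print(delete_renumber(["a", "b", "c"], 2))         # {1: 'a', 2: 'c'}
print(delete_stable({1: "a", 2: "b", 3: "c"}, 2))  # {1: 'a', 3: 'c'}
```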
If true, any file will be read in its
entirety when is called. If false,
may be read lazily.
The underlying source file for the Recno access method.
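The properties above (delimiting byte, fixed record length, pad character, backing source file) describe how a flat file maps onto numbered records. A pure-Python sketch of the two mappings, not the actual parser:

```python
# Sketch of how a Recno backing file maps flat bytes onto numbered
# records: either split on a delimiting byte, or cut into fixed-length
# records with the pad character stripped. Record numbers start at 1.

def delimited_records(data: bytes, delim: bytes = b"\n"):
    recs = data.split(delim)
    if recs and recs[-1] == b"":          # ignore a trailing delimiter
        recs.pop()
    return {i + 1: r for i, r in enumerate(recs)}

def fixed_records(data: bytes, length: int, pad: bytes = b" "):
    return {i // length + 1: data[i:i + length].rstrip(pad)
            for i in range(0, len(data), length)}

print(delimited_records(b"one\ntwo\nthree\n"))
# {1: b'one', 2: b'two', 3: b'three'}
print(fixed_records(b"ab  cd  ef  ", 4))
# {1: b'ab', 2: b'cd', 3: b'ef'}
```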
A class representing the locking options for Berkeley DB operations.
The isolation degree of the operation.
If true, acquire write locks instead of read locks when doing a
read, if locking is configured.
Setting ReadModifyWrite can eliminate deadlock during a
read-modify-write cycle by acquiring the write lock during the read
part of the cycle so that another thread of control acquiring a read
lock for the same item, in its own read-modify-write cycle, will not
result in deadlock.
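The deadlock the paragraph describes arises when two transactions both take a shared read lock on the same item and then each waits for the other to release it before upgrading to a write lock. This toy simulation (not the real lock manager) contrasts that with taking the write lock at read time:

```python
# Sketch of the read-modify-write deadlock and its avoidance. Two
# transactions each read then write the same item; the "lock table"
# is a deterministic toy model.

def run(read_modify_write):
    readers, writer = set(), None
    # Phase 1: both transactions issue their read.
    for txn in ("T1", "T2"):
        if read_modify_write:
            if writer is None:
                writer = txn      # write lock up front; T2 must wait
        else:
            readers.add(txn)      # shared read locks: both succeed
    if read_modify_write:
        return "T2 waits; no deadlock"
    # Phase 2: both try to upgrade read -> write. Each upgrade waits
    # for the *other* transaction's read lock: a deadlock.
    return "deadlock: T1 and T2 each wait on the other's read lock"

print(run(read_modify_write=False))
print(run(read_modify_write=True))
```

With ReadModifyWrite, the second transaction simply blocks at its read instead of joining a cycle, so one of the two always makes progress.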
Instantiate a new LockingInfo object
A class representing a replication site used by Replication Manager
Environment ID assigned by the replication manager. This is the same
value that is passed to
for the
event.
The address of the site
If true, the site is connected.