TopSpeed: Inside


The TopSpeed driver posts an ERRORCODE of 90 for unexpected runtime errors. At the same time, the driver posts a FILEERRORCODE (the former TPSBT error code) that helps diagnose the problem. This error handling gives you more control over runtime errors and provides more diagnostic information: your program can trap ERRORCODE() = 90 and react accordingly.

Should you receive an ERRORCODE of 90 from the TopSpeed driver, we want to know about it. Please send us a copy of the file and the corresponding FILEERRORCODE value.
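As a minimal sketch of trapping this condition after a file operation (MyFile is a hypothetical TopSpeed file; any driver operation can raise error 90):

```
  ADD(MyFile)
  IF ERRORCODE() = 90                  ! unexpected TopSpeed runtime error
    ! FILEERRORCODE() returns the driver-specific (former TPSBT) code,
    ! FILEERROR() its message text
    MESSAGE('TopSpeed error ' & FILEERRORCODE() & ': ' & FILEERROR())
  END
```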

Large Keys (or small RAM)

APPEND() is recommended over ADD() if the total size of the keys exceeds the amount of RAM available, if there is more than one key, or when adding a large number of records. The size of a key (for this purpose) is the number of entries times (the combined size of the key's component fields + 10 bytes). If the records being added are already in an approximate key order, then you can discount that key for the purposes of the above calculation.

As an example, if a file has two 40 byte keys and 2 Megabytes of RAM are available, then ADD() becomes (relatively) slow when the database size exceeds about 2,000,000 / (40 + 10 + 40 + 10) = 20,000 records.
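Because APPEND() does not update keys, a batch load along these lines would add the records and then rebuild the keys in a single pass (a hedged sketch; MyFile, its field, and the record count are hypothetical):

```
  OPEN(MyFile)                 ! deny-write open for exclusive batch access
  STREAM(MyFile)               ! defer buffer flushing for speed
  LOOP i# = 1 TO 100000
    MyFile.Rec.Id = i#         ! fill the record buffer
    APPEND(MyFile)             ! add the record without updating keys
  END
  BUILD(MyFile)                ! rebuild all keys in one merge
  FLUSH(MyFile)                ! flush buffers and release the stream lock
  CLOSE(MyFile)
```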

Incremental Key/Index Build

The TopSpeed driver implements incremental building; this means that building a key only reads records starting from the first record appended since the key was last built. The driver merges the new keys with the existing key. Thus building a large key where only a few recently added records have been modified should be fast. See the FULLBUILD driver string above.

Building an index is similar, but must start at the minimum physical record whose position in the index has changed since the index was last built.

Dynamic indexes are not retained, so cannot be built incrementally.

Batch Processing Performance

When writing a large number of records, use STREAM() or open the file in a deny write mode, that is, OPEN(file) rather than SHARE(file). After the records have been written, call FLUSH() to allow other users access.

It is very important to use STREAM() when ADDing/APPENDing/PUTting a large number of records. STREAM() will typically make processing about 20 times faster. For example, adding 1000 records might take nearly 2 minutes without STREAM(), but only 5 seconds with STREAM().

It is not necessary to use STREAM() or FLUSH() on a logged out file (performance on logged out files is always good).

STREAM has the effect of LOCKing the file.
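A hedged sketch of the pattern described above (Orders is a hypothetical shared TopSpeed file, and MoreWork() stands in for your batch loop condition):

```
  SHARE(Orders)                ! shared open; other users stay connected
  STREAM(Orders)               ! locks the file and defers buffer flushes
  LOOP WHILE MoreWork()
    ! ... fill the Orders record buffer here ...
    ADD(Orders)
  END
  FLUSH(Orders)                ! write buffers and let other users back in
```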

POINTERs and Deleted Records

POINTER(key) returns the relative position of the record within the file. Consequently when that record is DELETEd, the pointer becomes invalid. Any subsequent access using the pointer fails. If you require fuzzy matching whereby the nearest record is returned, use the POSITION() function.
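A sketch of the POSITION() approach (Cust, the Cus:NameKey key, and the SavedPos variable are hypothetical):

```
  ! SavedPos is declared elsewhere as a STRING large enough for the key
  SavedPos = POSITION(Cus:NameKey)   ! capture the position via the key
  ! ... the record may be DELETEd (e.g. by another user) here ...
  RESET(Cus:NameKey, SavedPos)       ! reposition; nearest if deleted
  NEXT(Cust)                         ! read the nearest surviving record
```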

Data Compression–STRINGs v MEMOs

The TopSpeed driver compresses the entire record buffer area (not individual fields within the record), therefore, compression gains can be realized by placing similar fields adjacent to each other in the FILE declaration.

The TopSpeed file system uses the same compression algorithm for STRINGs and MEMOs; however, the compression occurs at a “higher level” for MEMOs than for STRINGs. As a result, MEMOs have a disk-space advantage over large STRINGs (over 500 bytes), and smaller STRINGs can have a slight performance advantage over MEMOs. The larger the STRING, the greater the advantage.

MEMOs also offer the choice between BINARY and non-binary storage, and MEMOs may be omitted from all processing with the NOMEMO statement.

STRINGs are always allocated space (RAM) within the record buffer, whereas MEMOs are only allocated space when the file is OPENed. Also MEMOs cannot be key components.

Estimating File Size

The TopSpeed file driver compresses data and key information, so the ultimate file size depends on the “compressibility” of the data and keys. In the worst case (data and keys cannot be compressed because there is no repeating information) the file size may be estimated as:

(RecordSize + All Key components) * Records + Fixed Overhead

In a more realistic case (data and keys are compressible), the file size may be estimated as:

((size of all string fields)/(compressibility factor) + size of all binary fields + size of all binary key components + (4 * number of string key components) + 1) * Records + Fixed Overhead

Note that the Fixed Overhead term varies with your file definition: it includes about 800 bytes for the driver, plus the header information describing the file's fields and keys. The more fields and keys, and the longer their names, the higher the fixed overhead. A rough rule of thumb for calculating fixed overhead is 800 bytes + 40 bytes for each field and key. For example:

File Description       Estimated Fixed Overhead
1 field, no keys       1KB
20 fields, 10 keys     2KB
200 fields, 10 keys    9KB

Concurrent User Limit

The TopSpeed driver limits concurrent users to 1024 per file; additional users would have to wait momentarily until a slot opens up. Practically speaking the driver is very unlikely to reach this limit since very few networks and servers will support this many concurrent users. Generally, we recommend a client/server file system for more than 30 concurrent users.

Transaction Processing–the TCF File

Speedy Logging and Automatic Recovery

TopSpeed transaction logging is very fast (about 100 times faster than the Clarion driver). With LOGOUT, the TopSpeed engine posts all transactions to memory. ROLLBACK simply frees the memory, while COMMIT writes out the database changes in a stream.

If a system crashes during a transaction (LOGOUT–COMMIT), the recovery is automatically handled by the TopSpeed driver the next time the affected file is accessed.
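A hedged sketch of the standard LOGOUT/COMMIT/ROLLBACK pattern described above (Orders and Items are hypothetical TopSpeed files):

```
  LOGOUT(1, Orders, Items)        ! begin transaction; 1-second lock retry
  IF ERRORCODE()                  ! LOGOUT itself can fail
    MESSAGE('Cannot start transaction: ' & ERROR())
  ELSE
    ADD(Orders)
    ADD(Items)
    IF ERRORCODE()
      ROLLBACK                    ! frees the in-memory transaction
    ELSE
      COMMIT                      ! streams all changes to disk
    END
  END
```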

The Transaction Control File

The transaction control file (.TCF) is used to ensure that changes to more than one file, which are grouped into a transaction, either all happen or none happen. By default the transaction control file has the name “TOPSPEED.TCF.” The TCF driver string lets you change this. See TCF for more information.

When any workstation finds a file which is in a partially committed state, and which was involved in a multi-file transaction, it needs to access the TCF file to decide what to do. The TCF file provides “atomicity”–a single (boolean) storage location which indicates whether a multi-file transaction committed or not.

Note that the .TCF file contains very little information; it just serves to coordinate multi-file commits. The actual rollback/commit data is stored in the data (.TPS) files.

How TopSpeed Transaction Logging Works

LOGOUT gives each transaction a unique ID which it stores in the .TCF file. LOGOUT also stores the .TCF file name and transaction ID in each data (.TPS) file which is updated, so that after a crash, the next time the file is opened the TopSpeed driver can find the .TCF file and do any necessary recovery. COMMIT removes the unique transaction id from the .TCF file.

To be effective, the .TCF file must be accessible when any files controlled by it are accessed. Therefore, you generally should not delete or move .TCF files. If a transaction updates network files, you should specify a transaction control file on the network.

All applications that use the same TPS files should use a single TCF file in a single location. One of the first things to do in any application that expects multiple programs to access the same TPS files at the same time is to set the TCF to a common location, so that all the programs use the same TCF.

Using different TCF files WILL result in TPS corruption. If there are several TCF files with various levels of accessibility (or a TCF file is deleted or overwritten), some of the files within a transaction might be updated and others left unchanged.


To avoid corruption of TPS files on a network, use:

SEND(file, 'TCF [ = filename ]')

and set the TCF name to a common place on the network.
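For instance (a sketch; the Orders file label and the server path are hypothetical):

```
  ! Point every program at the same transaction control file
  SEND(Orders, 'TCF=\\Server\AppData\myapp.tcf')
```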

Storing Multiple Tables in a single .TPS File

By using the characters '\!' in the NAME() attribute of a TopSpeed file declaration, you can specify that a single .TPS file will hold more than one table. For example, to declare a single .TPS file 's&p.tps' that contains three tables called Supp, Part, and Ship:

Supp FILE,DRIVER('TopSpeed'),PRE(Supp),CREATE,NAME('S&P\!Supp')


Part FILE,DRIVER('TopSpeed'),PRE(Part),CREATE,NAME('S&P\!Part')


Ship FILE,DRIVER('TopSpeed'),PRE(Ship),CREATE,NAME('S&P\!Ship')


Access and Sharing Modes

The TopSpeed driver optimizes the use of file handles, so that only one file handle is used per DOS file per thread. The access mode for that handle is set to the least restrictive of the open modes used for that handle; the sharing mode is set to the most restrictive of the sharing modes used.

For example:

FileA FILE,DRIVER('TopSpeed'),NAME('TPSFile\!SubTable')


FileB FILE,DRIVER('TopSpeed'),NAME('TPSFile\!SubTable')


FileC FILE,DRIVER('TopSpeed'),NAME('TPSFile\!SubTable2')


FileD FILE,DRIVER('TopSpeed'),NAME('TPSFile\!SubTable2')



 OPEN(FileA, 40H)  ! Read Only - Deny None

 OPEN(FileB, 42H)  ! Read/Write - Deny None

 ! Due to access promotion FileA now has Read/Write access

 OPEN(FileC, 10H)  ! Read Only - Deny All

 ! Due to sharing restriction all files now have Deny All sharing

 OPEN(FileD, 10H)  ! Read Only - Deny All

 ! This succeeds even though the handle's sharing mode is set to
 ! Deny All, because all four files share the same file handle

 ! At this stage all four files are open in Read/Write - Deny All

CLOSE(file) does not reset access modes, so given the above example, after:

 CLOSE(FileB)
 CLOSE(FileC)
 CLOSE(FileD)

FileA is still open with access mode Read/Write - Deny All.

Similarly, if one of the tables in the file is logged out, then all the tables are effectively logged out. If one table in the file is flushed, then all the tables are flushed.

This feature is especially useful when there are a large number of small tables, or when the application must normally access several related tables at once.


When using a variable to hold the name of the multi-table element, you can browse the file in the Dictionary Editor by selecting the Browse tablename option, selecting the core file name, and appending the specific table name to the multi-file name before loading. For example: invoice.tps\!customer, where invoice.tps is the file name stored on disk.

You can retrieve the names of the tables within a .TPS file with the SEND() command. To retrieve the first name, issue:


This returns the name of the first table. To retrieve the second name, issue:


This returns the name of the second table, and so on.

You can also rename the tables; for example, given the above declarations the following command renames the table called Supp to Old_Supp:


If you use the OWNER attribute on multiple tables in a single .TPS file, all the tables must have the same OWNER attribute.

If you don't specify a table name, the table is called 'unnamed', so that the following are all equivalent:

myname     FILE,DRIVER('TopSpeed')

myname     FILE,DRIVER('TopSpeed'),NAME('myname')

myname     FILE,DRIVER('TopSpeed'),NAME('myname\!unnamed')

Collating Sequences

Changing Collating Sequence

Changing the collating sequence on a Clarion 2.003 or earlier TopSpeed file (by changing the .ENV file or OEM flag) corrupts the file.

This is no longer true, because the collating sequence for the file is now stored within the file. This change is fully backward compatible: old files continue to work as before, and new files are accessible by older programs.

To add the collating sequence information to an existing file, simply do a full build on the file:



The collating sequence for a TopSpeed file is established when the table is created or a full build is performed. Therefore the OEM flag is only significant at the creation of the file or on a full build.
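As a sketch, a full build can be forced with the FULLBUILD driver string mentioned above (MyFile is hypothetical, and the exact 'on'/'off' value syntax is an assumption; check the driver-string reference):

```
  SEND(MyFile, 'FULLBUILD=on')   ! force a full (non-incremental) build
  BUILD(MyFile)                  ! rebuild keys; stores the collating sequence
  SEND(MyFile, 'FULLBUILD=off')  ! restore incremental building
```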

Any application that uses an incorrect sequence (due to an incompatible .ENV file) to access a file may get unpredictable results, but will not corrupt the data.

Accessing TopSpeed files with Access Jet and ODBC

Occasionally, the Access Jet engine returns “#deleted” for each requested field. This is a known bug in Microsoft Access. The default query used by Access does an internal comparison of the record set to determine if a record has been deleted or modified from the database. The mechanism is known to work poorly for certain data types, notably DECIMAL.

Microsoft recommends using an SQL pass-through query as a workaround for this problem. To create an SQL pass-through query:

1. Choose SQL Specific, Pass Through from the Query menu in query design mode. For Access 7, Select Query, and press the New button.

2. Accept the default of Design View.

3. Close the Show Table window.

4. Select SQL Specific from the Query menu and select the Pass-Through option.

5. Enter the SQL statement.

The query can be saved for future use. This method corrects virtually all display problems, but the resulting grid is not updatable; when SQL pass-through is used, updates must be performed using an Update Query.
