====== Base Types ======

To facilitate this automatic data type conversion, Clarion internally uses four Base Types to which all data items are automatically converted when any operation is performed on the data. These types are: STRING, LONG, DECIMAL, and REAL. These are all standard Clarion data types.

The STRING Base Type is used as the intermediate type for all string operations. The LONG, DECIMAL, and REAL Base Types are used in all arithmetic operations. Which numeric type is used, and when, is determined by the original data types of the operands and the type of operation being performed on them.

The "normal" Base Type for each data type is:

__**Base Type LONG:**__

  BYTE
  SHORT
  USHORT
  LONG
  DATE
  TIME
  Integer Constants
  Strings declared with @P pictures

__**Base Type DECIMAL:**__

  ULONG
  DECIMAL
  PDECIMAL
  STRING(@Nx.y)
  Decimal Constants

__**Base Type REAL:**__

  SREAL
  REAL
  BFLOAT4
  BFLOAT8
  STRING(@Ex.y)
  Scientific Notation Constants
  Untyped (? and *?) Parameters

__**Base Type STRING:**__

  STRING
  CSTRING
  PSTRING
  String Constants

DATE and TIME data types are first converted to Clarion Standard Date and Clarion Standard Time intermediate values, and have a LONG Base Type for all operations.

For the most part, Clarion's internal use of these Base Types is transparent to the programmer and does not require any consideration when planning applications.
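As a rough illustration of how operand types select a Base Type, here is a minimal Clarion sketch (the variable names are hypothetical, and the comments restate the conversion rules above rather than anything verified against a compiler):

```
Counter   BYTE
Offset    SHORT
Result    LONG
Balance   DECIMAL(15,2)
Name      STRING(20)

  CODE
  Result  = Counter + Offset     ! BYTE and SHORT operands both convert to the LONG Base Type
  Balance = Balance * 2          ! a DECIMAL operand makes this DECIMAL arithmetic
                                 ! (the integer constant 2 is converted from LONG)
  Name    = CLIP(Name) & '!'     ! string operations use the STRING Base Type
```

The point of the sketch is that the programmer never requests a Base Type explicitly; the compiler chooses it from the declared types of the operands.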
However, for business programming with numeric data containing fractional portions (currency, for instance), using data types that have the DECIMAL Base Type has some significant advantages over the REAL Base Type:

  * DECIMAL supports 31 significant digits of accuracy for data storage, while REAL only supports 15.
  * DECIMAL automatically rounds to the precision specified by the data declaration, while REAL can create rounding problems due to the translation of decimal (base 10) numbers to binary (base 2) for processing by the CPU's Floating Point Unit (or Floating Point emulation software).
  * On machines without a Floating Point Unit, DECIMAL is substantially faster than REAL.
  * DECIMAL operations closely mirror conventional (decimal) arithmetic.
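The rounding difference can be sketched as follows (hypothetical declarations; a illustration of the rules stated above, not compiler-verified output):

```
Price   DECIMAL(7,2)
Cost    REAL

  CODE
  Price = 10.0 / 3.0   ! decimal constants use the DECIMAL Base Type;
                       ! the result rounds to the declared 2 decimal places: 3.33
  Cost  = 0.1          ! REAL stores a binary (base 2) approximation of 0.1,
                       ! not the exact decimal value
```

This is why currency totals accumulated in a DECIMAL come out exactly as conventional decimal arithmetic would predict, while the same totals accumulated in a REAL can drift by tiny binary-rounding errors.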