The accuracy of the floating-point number representation in two 16-bit words of a computer is approximately
A: 16 digits
B: 6 digits
C: 4 digits
D: 2 digits

In a floating-point representation stored in two 16-bit words (32 bits in total), accuracy is approximately 6 decimal digits. In a typical layout, roughly 24 of those bits hold the significand, and 24 × log₁₀(2) ≈ 7.2, so only about 6 to 7 significant decimal digits can be represented reliably. This reflects the precision limitations of binary storage. Modern systems commonly use 32-bit (single-precision) or 64-bit (double-precision) floating-point formats, the latter giving about 15 to 16 decimal digits of accuracy.
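
The short C sketch below illustrates this limit, assuming an IEEE 754 single-precision layout for the 32-bit `float` type (the usual case on modern compilers); `FLT_DIG` from `<float.h>` reports the guaranteed number of decimal digits, which is 6.

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* pi to 16 decimal digits, stored in a 32-bit float
       (i.e., two 16-bit words). */
    float f = 3.141592653589793f;

    /* FLT_DIG is the number of decimal digits a float is
       guaranteed to preserve; on IEEE 754 systems it is 6. */
    printf("FLT_DIG = %d\n", FLT_DIG);

    /* Printing more digits than the format holds shows that
       everything beyond roughly the 7th digit is noise. */
    printf("pi as float = %.15f\n", f);
    return 0;
}
```

On an IEEE 754 machine this prints a value that matches pi only for the first 7 or so digits, which is why "6 digits" (choice B) is the expected answer.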