Exploring Floating-Point Trade-Offs in ML


Rocco Salvia, Zvonimir Rakamarić. Workshop on Approximate Computing Across the Stack (WAX 2018), Williamsburg, VA, USA.
[pdf] [bib]

Abstract: Perceptron and Support Vector Machine (SVM) algorithms are two well-known and widely used linear predictors. They compute a hypothesis function using supervised learning to predict the labels of unknown future samples. Both training and testing procedures are typically implemented in double precision floating-point arithmetic to minimize error, which often results in overly conservative implementations that waste runtime and/or energy. In this work, we empirically analyze the impact of floating-point precision on these predictors. We assess whether the precision of reading the dataset, training, or testing is the most critical for overall accuracy. Our analysis focuses in particular on very small floating-point bit-widths (i.e., only several bits of precision) and compares them against the standard single and double precision types.
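To illustrate the kind of experiment the abstract describes, here is a minimal Python sketch (not the paper's actual implementation) that trains a perceptron while simulating a reduced-precision floating-point format. The `quantize` helper and the mantissa-truncation approach are assumptions made purely for illustration; they stand in for whatever precision-emulation machinery the authors used.

```python
import numpy as np

def quantize(x, mantissa_bits):
    # Hypothetical helper: simulate a low-precision float by keeping only
    # `mantissa_bits` bits of mantissa (round-to-nearest), exponent untouched.
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def train_perceptron(X, y, mantissa_bits, epochs=20, lr=0.1):
    # Perceptron training with every intermediate value pushed through the
    # simulated low-precision format (labels y are assumed to be +1/-1).
    Xq = quantize(X, mantissa_bits)          # precision of "reading the dataset"
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(Xq, y):
            score = quantize(xi @ w + b, mantissa_bits)   # low-precision training
            pred = 1.0 if score >= 0 else -1.0
            if pred != yi:
                w = quantize(w + lr * yi * xi, mantissa_bits)
                b = quantize(b + lr * yi, mantissa_bits)
    return w, b

def accuracy(X, y, w, b, mantissa_bits):
    # Low-precision testing: quantize the decision scores before thresholding.
    scores = quantize(X @ w + b, mantissa_bits)
    preds = np.where(scores >= 0, 1.0, -1.0)
    return float(np.mean(preds == y))
```

Sweeping `mantissa_bits` independently for the dataset-reading, training, and testing stages (while keeping the others in double precision) gives one rough way to ask which stage's precision matters most for the final accuracy.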

Bibtex:

@inproceedings{wax2018-sr,
  author = {Rocco Salvia and Zvonimir Rakamari\'c},
  title = {Exploring Floating-Point Trade-Offs in {ML}},
  booktitle = {Informal Proceedings of the Workshop on Approximate Computing Across the
    Stack (WAX)},
  year = {2018},
  note = {Position paper},
}