Sparse linear algebra problems, typically solved using iterative methods, are ubiquitous throughout scientific and data analysis applications and are often the most expensive computations in large-scale codes due to the high cost of data movement. Approaches to improving the performance of iterative methods typically involve modifying or restructuring the algorithm to reduce or hide this cost. Such modifications can, however, result in drastically different behavior in terms of convergence rate and accuracy. A clear, thorough understanding of how inexact computations, due to either finite precision error or intentional approximation, affect numerical behavior is thus imperative in balancing the tradeoffs between accuracy, convergence rate, and performance in practical settings.

In this talk, we focus on two general classes of iterative methods for solving linear systems: Krylov subspace methods and iterative refinement. We present bounds on the attainable accuracy and convergence rate in finite precision s-step and pipelined Krylov subspace methods, two popular variants designed for high performance. For s-step methods, we demonstrate that our bounds on attainable accuracy can lead to adaptive approaches that both achieve efficient parallel performance and ensure that a user-specified accuracy is attained. We present two such adaptive approaches: a residual replacement scheme and a variable s-step technique in which the parameter s is chosen dynamically throughout the iterations. Motivated by the recent trend of multiprecision capabilities in hardware, we present new forward and backward error bounds for a general iterative refinement scheme using three precisions.
The analysis suggests that on architectures where half precision is implemented efficiently, it is possible to solve certain linear systems up to twice as fast and to greater accuracy.

As we push toward exascale-level computing and beyond, designing efficient, accurate algorithms for emerging architectures and applications is of utmost importance. We discuss extensions to machine learning and data analysis applications, the development of numerical autotuning tools, and the broader challenge of understanding what increasingly large problem sizes will mean for finite precision computation, in both theory and practice.
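To illustrate the idea behind iterative refinement in multiple precisions, the following is a minimal sketch, not the scheme analyzed in the talk. It uses three precision levels in the spirit of the approach: a low factorization precision, a working precision, and a higher residual precision. Since half precision is not widely supported by LAPACK routines, float32 stands in for the low precision and float64 for both the working and residual precisions; the function name `ir3` and all parameters are illustrative choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def ir3(A, b, max_iter=20, tol=1e-12):
    """Iterative refinement sketch with a low-precision factorization.

    Factorization precision: float32 (standing in for half precision).
    Working precision:       float64.
    Residual precision:      float64 (a real three-precision scheme would
                             use something higher, e.g. quad).
    """
    # Factor A once in low precision; this is the O(n^3) cost.
    lu, piv = lu_factor(A.astype(np.float32))
    # Initial solve in low precision, then promote to working precision.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        # Residual computed in the (higher) residual precision.
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction solve reuses the cheap low-precision factorization.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For a reasonably well-conditioned system, each refinement step reuses the cheap low-precision LU factors while the high-precision residual drives the iterate toward working-precision accuracy, which is the mechanism behind the "faster and more accurate" claim above.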

LOCATION: Mondi Seminar Room 2, Central Building, IST Austria
ORGANIZER: pdelreal@ist.ac.at
SUMMARY: Sparse Linear Algebra in the Exascale Era
URL: https://talks-calendar.app.ist.ac.at/events/1174