Conference paper, 2024

Cross-Layer Reliability Evaluation and Efficient Hardening of Large Vision Transformers Models

Abstract

Vision Transformers (ViTs) are highly accurate Machine Learning (ML) models. However, their large size and complexity increase the expected error rate due to hardware faults. Measuring the error rate of large ViT models is challenging, as conventional microarchitectural fault simulations can take years to produce statistically significant data. This paper proposes a two-level evaluation based on data collected through more than 70 hours of neutron beam experiments and more than 600 hours of software fault simulation. We consider 12 ViT models executed on 2 NVIDIA GPU architectures. We first characterize the fault model in ViT kernels to identify the faults most likely to propagate to the output. We then design dedicated procedures, efficiently integrated into the ViT, to locate and correct these faults. We propose Maximum corrupted values (MaxiMals), an experimentally tuned low-cost mitigation solution that reduces the impact of transient faults on ViTs. We demonstrate that MaxiMals can correct 90.7% of critical failures, with execution time overheads as low as 5.61%.
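The abstract describes locating and correcting fault-induced outliers in ViT kernel outputs, but does not detail the mechanism. The sketch below illustrates one plausible realization under stated assumptions: per-layer maximum activation values are profiled on fault-free runs, and any value beyond that bound at inference time is clamped. The names `attach_maximals` and `profiled_max`, and the example layer bound, are hypothetical and not the authors' API.

```python
# Hypothetical sketch of a MaxiMals-style range check (not the paper's code).
# Assumption: each monitored layer has a maximum absolute activation value
# profiled on fault-free executions; outputs beyond it are saturated.
import torch
import torch.nn as nn


def attach_maximals(model: nn.Module, profiled_max: dict[str, float]) -> None:
    """Register forward hooks that clamp layer outputs to profiled bounds."""

    def make_hook(bound: float):
        def hook(module, inputs, output):
            # A transient hardware fault typically manifests as an abnormally
            # large value; saturating it limits propagation to the output.
            return torch.clamp(output, min=-bound, max=bound)
        return hook

    for name, module in model.named_modules():
        if name in profiled_max:
            module.register_forward_hook(make_hook(profiled_max[name]))


# Usage (illustrative): bounds would come from profiling each monitored kernel.
# model = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
# attach_maximals(model, {"encoder.layers.encoder_layer_0.mlp": 50.0})
```

Saturating to an experimentally observed maximum keeps the check cheap (one elementwise clamp per monitored kernel), which is consistent with the low execution-time overhead reported in the abstract.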
Main file: dac_2024_vits.pdf (12.39 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04456702, version 1 (27-02-2024)

Identifiers

  • HAL Id: hal-04456702, version 1

Cite

Lucas Roquet, Fernando Fernandes dos Santos, Paolo Rech, Marcello Traiola, Olivier Sentieys, et al. Cross-Layer Reliability Evaluation and Efficient Hardening of Large Vision Transformers Models. Design Automation Conference (DAC), Jun 2024, San Francisco, United States. ⟨hal-04456702v1⟩