[NLPL Task Force (A)] CuDNN for fp16 training
Vinit Ravishankar
vinitr at ifi.uio.no
Fri May 8 16:09:05 UTC 2020
Hi! I’m trying to figure out how to enable half-precision (fp16) floating-point training in Python. I’m using the fairseq library [1], which has an fp16 flag, in conjunction with my own virtual environment (Python 3.7.3). I’m not loading any environment modules; I haven’t needed them for regular multi-GPU work.

Unfortunately, my program falls back to full-width (fp32) floats for lack of fp16 support. That support comes from NVIDIA’s cuDNN, but loading any of the provided cuDNN modules results in a core dump the moment fairseq is imported. Is there a recommended way to use these modules? Thanks!
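For context, here is a minimal check of what PyTorch reports about cuDNN and fp16 support (standard PyTorch API only, nothing fairseq- or cluster-specific; the output will of course depend on which modules are loaded):

    import torch

    # Which CUDA/cuDNN stack is PyTorch actually using?
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("cuDNN enabled:", torch.backends.cudnn.enabled)
    print("cuDNN version:", torch.backends.cudnn.version())

    if torch.cuda.is_available():
        # fp16 needs compute capability >= 5.3; it is only fast
        # on Volta (7.0) and newer, which have tensor cores.
        print("Device:", torch.cuda.get_device_name(0))
        print("Capability:", torch.cuda.get_device_capability(0))
        # Tiny fp16 smoke test on the GPU.
        x = torch.randn(4, 4, device="cuda", dtype=torch.half)
        print("fp16 matmul dtype:", (x @ x).dtype)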
– Vinit
1. https://github.com/pytorch/fairseq