[NLPL Users] news from the NLPL underground

Stephan Oepen oe at ifi.uio.no
Wed Oct 3 20:58:24 CEST 2018


dear colleagues,

earlier today, you were automatically subscribed to the mailing list
‘users at nlpl.eu’, a low-traffic list intended to distribute
announcements of wider interest regarding the NLPL infrastructure.  in
case you are not yet familiar with the NLPL initiative: in a nutshell,
it is a collaborative nordic effort to jointly build and maintain a
productive software and data environment for large-scale NLP research.
for more general background, please see

http://www.nlpl.eu

you have been subscribed to this list because you currently have
access to NLPL cpu and storage allocations on either the Norwegian
Abel or the Finnish Taito supercluster (or both), or because you are
part of the NLPL project team.  if you very strongly feel that you
would absolutely rather not be subscribed to this mailing list, please
contact ‘infrastructure at nlpl.eu’.  list membership is automatically
determined once a day, based on associations to allocations on Abel
and Taito, so simply self-unsubscribing from the list would have no
lasting effect.

besides wanting to explain your newly acquired subscription to this
mailing list, i am writing to point you to some recent developments
under the NLPL umbrella and to invite your input and feedback.  to limit
traffic to this fairly large group of people (currently some 50
users), we suggest that you direct all comments to the NLPL
infrastructure task force (bjørn lindi, martin matthiesen, jörg
tiedemann, and myself): ‘infrastructure at nlpl.eu’.

we have recently created trial installations of PyTorch, TensorFlow,
and OpenNMT-py on both Abel and Taito.  these installations support
cpu as well as gpu nodes and should behave uniformly across the two
systems.  we would like to invite you to start using these
environments and let us know what works well and, in particular, what
does not.  for basic instructions, please see:

http://wiki.nlpl.eu/index.php/Infrastructure/software/catalogue
http://wiki.nlpl.eu/index.php/Infrastructure/software/pytorch
http://wiki.nlpl.eu/index.php/Infrastructure/software/tensorflow
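
as a very first sanity check, something along the lines of the snippet
below should confirm that PyTorch sees the gpu and can actually
compute; it only assumes that one of the NLPL PyTorch installations
has been loaded as described on the wiki pages above (module names and
load commands differ between Abel and Taito and are documented there):

  import torch

  # report version and gpu visibility of the loaded installation
  print("PyTorch version:", torch.__version__)
  print("CUDA available: ", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("GPU device:     ", torch.cuda.get_device_name(0))

  # tiny forward pass to confirm the installation actually computes
  x = torch.randn(4, 8)
  layer = torch.nn.Linear(8, 2)
  print(layer(x).shape)  # expected: torch.Size([4, 2])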

in some cases, the NLPL installations of these frameworks complement
earlier installations by the general support teams for Abel and Taito
(USIT and CSC, respectively).  the main reason for us to provide our
own, NLPL-maintained installations of such general frameworks is to
increase uniformity across the two systems (and to resolve some
remaining limitations in earlier installations, e.g. uniform support
for cpu and gpu nodes).  however, it is of course possible that the
NLPL installations lack some features or fall short of peak
performance, because uniformity across Abel and Taito may come at the
expense of system-specific optimization.

we would be especially grateful if those of you with prior experience
running PyTorch or TensorFlow on Abel or Taito could compare running
your code on the NLPL installations versus the earlier installations
of these environments.  please do not hesitate to let
‘infrastructure at nlpl.eu’ know about your experience from such
contrastive experiments, in terms of both usability and performance!
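
in case it is useful as a starting point, one rough way of probing raw
performance is to run one and the same small timing script under each
installation and compare the wall-clock numbers; the sketch below is
only meant as an illustration (matrix sizes and repeat counts are
arbitrary), not as a proper benchmark:

  import time
  import torch

  device = "cuda" if torch.cuda.is_available() else "cpu"
  a = torch.randn(2048, 2048, device=device)
  b = torch.randn(2048, 2048, device=device)

  _ = a @ b                      # warm-up; the first call may include setup costs
  if device == "cuda":
      torch.cuda.synchronize()

  start = time.time()
  for _ in range(100):
      c = a @ b
  if device == "cuda":
      torch.cuda.synchronize()   # wait for the gpu before stopping the clock
  print(device, "- 100 matrix products in", round(time.time() - start, 2), "seconds")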

with thanks in advance, oe (for the NLPL infrastructure task force)


