<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
p.xmsonormal, li.xmsonormal, div.xmsonormal
{mso-style-name:x_msonormal;
margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
.MsoChpDefault
{mso-style-type:export-only;}
@page WordSection1
{size:8.5in 11.0in;
margin:70.85pt 56.7pt 70.85pt 56.7pt;}
div.WordSection1
{page:WordSection1;}
--></style>
<div class="WordSection1">
<p class="MsoNormal">Hi again,</p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The equivalent Taito scripts are in /wrk/yvessche/onmt_test2/. For some reason, I was not able to get the translation to run with -gpu $CUDA_VISIBLE_DEVICES, but -gpu 0 worked in my test. I’ll have to test whether srun always assigns GPU id 0 or whether the right id has to be found by trial and error.</p>
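For reference, my current working hypothesis (an assumption, not yet verified across nodes) is that Slurm re-indexes the devices it grants, so inside the allocation the first visible card is always device 0 even when CUDA_VISIBLE_DEVICES holds a different physical id. That would explain why -gpu 0 works while passing the variable verbatim does not:

```shell
# Assumption: inside an srun allocation, the GPUs named in
# CUDA_VISIBLE_DEVICES are re-indexed from 0, so OpenNMT-py's -gpu flag
# should receive an index into the visible set, not the physical id.
CUDA_VISIBLE_DEVICES="2"   # example of what srun might export
gpu=0                      # first (and here only) visible device
echo "physical id(s): $CUDA_VISIBLE_DEVICES -> pass -gpu $gpu"
```

If this holds, -gpu 0 should be safe whenever the job requests a single GPU.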
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Would it make sense to include these scripts somewhere directly in the module?</p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Best,</p>
<p class="MsoNormal">Yves</p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Scherrer, Yves<br>
<b>Sent:</b> Thursday, October 25, 2018 1:54:36 PM<br>
<b>To:</b> Stephan Oepen<br>
<b>Cc:</b> infrastructure<br>
<b>Subject:</b> RE: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)</font>
<div> </div>
</div>
<div>
<meta content="text/html; charset=utf-8">
<meta name="x_Generator" content="Microsoft Word 15 (filtered medium)">
<style>
<!--
@font-face
{font-family:"Cambria Math"}
@font-face
{font-family:Calibri}
p.x_MsoNormal, li.x_MsoNormal, div.x_MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif}
a:x_link, span.x_MsoHyperlink
{color:blue;
text-decoration:underline}
a:x_visited, span.x_MsoHyperlinkFollowed
{color:#954F72;
text-decoration:underline}
.x_MsoChpDefault
{}
@page WordSection1
{margin:70.85pt 56.7pt 70.85pt 56.7pt}
div.x_WordSection1
{}
-->
</style>
<div lang="EN-US" link="blue" vlink="#954F72">
<div class="x_WordSection1">
<p class="x_MsoNormal">Hi Stephan,</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">Sorry for my silence, I started working on it and later forgot about it…</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">I have set up two scripts on Abel that illustrate the use of OpenNMT-py. In /usit/abel/u1/yvessche/onmt_test2 you can find two scripts, train.sh and translate.sh (hopefully permissions are ok…). They simulate our WMT17 English-to-Finnish
translation system but use slightly smaller datasets. I have restricted training time to 6 hours in the script, which amounts to about 60000 training batches on Abel. With this, I get a BLEU score of 8.18 on the test data.</p>
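For concreteness, the 6-hour cap could sit in the Slurm header of train.sh roughly as follows (a sketch only; the partition name and training flags are placeholders, not necessarily what the script actually uses):

```shell
#!/bin/bash
# Hypothetical sketch of a train.sh header; partition name and
# train.py options are placeholders, not the actual script contents.
#SBATCH --time=06:00:00        # hard 6-hour cap on training
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --partition=accel      # placeholder partition name
python train.py -data demo -save_model demo-model
```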
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">I have been running the same scripts on Taito too. On the faster Taito P100 nodes, 6 hours are sufficient for the default 100000 training batches, which increases the BLEU score to 9.73. However, I’m struggling a bit to get the GPU translation to work. CPU translation does work, though, taking 10 minutes instead of 5 on the 1370-sentence test set. I’ll keep you updated on this issue.</p>
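The two decoding set-ups I am comparing look roughly like this (a sketch; model and file names are placeholders for what translate.sh actually uses, and the flags follow OpenNMT-py 0.2.x conventions):

```shell
# Placeholder names throughout; not the literal contents of translate.sh.
# GPU decoding (what currently fails for me on Taito):
python translate.py -model model.pt -src test.src -output pred.txt -gpu 0
# CPU decoding (works; ~10 min on the 1370-sentence test set):
python translate.py -model model.pt -src test.src -output pred.txt
```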
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">Best,</p>
<p class="x_MsoNormal">Yves</p>
<p class="x_MsoNormal"> </p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Stephan Oepen <oe@ifi.uio.no><br>
<b>Sent:</b> Wednesday, October 24, 2018 1:47:30 PM<br>
<b>To:</b> Scherrer, Yves<br>
<b>Cc:</b> infrastructure<br>
<b>Subject:</b> Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:11pt;">
<div class="PlainText">hi again, yves,<br>
<br>
USIT (here at oslo) is asking for some NLP benchmarks, to run across<br>
different platforms, including new gpu hardware (by huawei). i was<br>
thinking that OpenNMT-py training might well be a suitable benchmark.<br>
would you have a chance to point me to an example script or two,<br>
ideally training on data from the NLPL project directory (but that is<br>
not an absolute requirement)?<br>
<br>
thanks in advance, oe<br>
<br>
<br>
On Fri, Sep 28, 2018 at 3:58 PM Martin Matthiesen<br>
<martin.matthiesen@csc.fi> wrote:<br>
><br>
> Hi Yves,<br>
><br>
> I talked to Markus about virtualenv, and he in turn told me that intelpython uses conda env for virtual environments. virtualenv should also work; you should be able to install it yourself via pip install --user virtualenv. I am not sure what the right
level of support from our side should be here. Should we consistently install virtualenv?<br>
><br>
> Regards,<br>
> Martin<br>
><br>
> --<br>
> Martin Matthiesen<br>
> CSC - Tieteen tietotekniikan keskus<br>
> CSC - IT Center for Science<br>
> PL 405, 02101 Espoo, Finland<br>
> +358 9 457 2376, martin.matthiesen@csc.fi<br>
> Public key : <a href="https://pgp.mit.edu/pks/lookup?op=get&search=0x74B12876FD890704">
https://pgp.mit.edu/pks/lookup?op=get&search=0x74B12876FD890704</a><br>
> Fingerprint: AA25 6F56 5C9A 8B42 009F BA70 74B1 2876 FD89 0704<br>
><br>
> ----- Original Message -----<br>
> > From: "Yves Scherrer" <yves.scherrer@helsinki.fi><br>
> > To: "Stephan Oepen" <oe@ifi.uio.no><br>
> > Cc: "Martin Matthiesen" <martin.matthiesen@csc.fi>, "infrastructure" <infrastructure@nlpl.eu><br>
> > Sent: Wednesday, 26 September, 2018 21:40:50<br>
> > Subject: Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
><br>
> > To further validate your installation, I am currently training a model, and once<br>
> > I found that I needed to use $CUDA_VISIBLE_DEVICES, it also seems to be training<br>
> > on the GPU :)<br>
> ><br>
> > I’ll see if I can easily modify my test to use data from the NLPL repository<br>
> > (the data is certainly not the problem, but there might be some preprocessing<br>
> > steps for which scripts are not (yet) available).<br>
> ><br>
> > Regarding virtualenv on CSC, it’s hit or miss:<br>
> > - python-env/intelpython3.6-2018.3, which Martin mentioned lately and which<br>
> > contains PyTorch, doesn’t have virtualenv<br>
> > - python-env/3.5.3 has virtualenv, as you correctly observed<br>
> > - python-env/3.4.0, which is the default version on taito-shell, doesn’t have<br>
> > virtualenv<br>
> ><br>
> > I’ll have to test if it’s easier to build on the intelpython or the “normal” gnu<br>
> > one…<br>
> ><br>
> > Yves<br>
> ><br>
> >> On 26 Sep 2018, at 15:57, Stephan Oepen <oe@ifi.uio.no> wrote:<br>
> >><br>
> >> many thanks for validating (to some degree at least :-) my OpenNMT-py<br>
> >> installation on Abel. i have now added it to the software catalogue<br>
> >> and created minimal documentation on the NLPL wiki:<br>
> >><br>
> >> <a href="http://wiki.nlpl.eu/index.php/Infrastructure/software/catalogue">http://wiki.nlpl.eu/index.php/Infrastructure/software/catalogue</a><br>
> >> <a href="http://wiki.nlpl.eu/index.php/Translation/opennmt-py">http://wiki.nlpl.eu/index.php/Translation/opennmt-py</a><br>
> >><br>
> >> —could you suggest a minimal example workflow, demonstrating how to<br>
> >> train and decode with OpenNMT, ideally using files from our own<br>
> >> ‘/proj/nlpl/data/translation/’? speaking of which, should i start<br>
> >> replicating that directory from Taito to Abel, i.e. remove what you<br>
> >> had installed manually on Abel and instead turn on automated<br>
> >> replication once a day?<br>
> >><br>
> >> in principle, we should now produce a parallel installation of<br>
> >> OpenNMT-py on Taito, of course—which presupposes that we get something<br>
> >> parallel worked out for PyTorch.<br>
> >><br>
> >> yves, why do you say that CSC does not include ‘virtualenv’ in their<br>
> >> python installation? is there something principled that i am missing?<br>
> >><br>
> >> [oe@taito-login3 ~]$ module add python-env/3.5.3<br>
> >> Loading application python-3.5.3 environment with needed modules<br>
> >> Switching compiler gcc to gcc/5.4.0<br>
> >> Switching MPI version intelmpi to intelmpi/5.1.3<br>
> >><br>
> >> The following have been reloaded with a version change:<br>
> >> 1) gcc/4.8.2 => gcc/5.4.0 2) intelmpi/4.1.3 => intelmpi/5.1.3 3)<br>
> >> mkl/11.3.0 => mkl/11.3.2 4) python-env/3.4.0 => python-env/3.5.3 5)<br>
> >> python/3.4.0 => python/3.5.3<br>
> >><br>
> >> [oe@taito-login3 ~]$ type -all python<br>
> >> python is /appl/opt/python/3.5.3-gnu540/bin/python<br>
> >> [oe@taito-login3 ~]$ type -all virtualenv<br>
> >> virtualenv is /appl/opt/python/3.5.3-gnu540/bin/virtualenv<br>
> >><br>
> >> so, i am guessing we could presumably attempt an NLPL-maintained<br>
> >> installation of PyTorch into a 3.5 virtual environment, which would<br>
> >> likely require a custom glibc installation too (and the same kind of<br>
> >> dynamic linking ‘gymnastics’).<br>
> >><br>
> >> i feel i still need to learn more about the CSC environment. are the<br>
> >> modules available on taito-gpu the same as on the cpu nodes? in other<br>
> >> words, do both types of nodes see the same file system?<br>
> >><br>
> >> cheers, oe<br>
> >><br>
> >><br>
> >> On Wed, Sep 26, 2018 at 9:59 AM, Scherrer, Yves<br>
> >> <yves.scherrer@helsinki.fi> wrote:<br>
> >>> Hi,<br>
> >>><br>
> >>><br>
> >>><br>
> >>> I’ve had a quick look at Stephan’s OpenNMT-py on Abel. The onmt module seems<br>
> >>> to work, but one generally uses the scripts “preprocess.py”, “train.py” and<br>
> >>> “translate.py” (at the root directory of the Github repo), and these scripts<br>
> >>> seem to be missing from the module. Would it be possible to copy these three<br>
> >>> scripts (there is a fourth one, “server.py”, but this one might not be<br>
> >>> relevant for common usage) somewhere inside the virtual environment, so that<br>
> >>> they can be found and called easily?<br>
> >>><br>
> >>><br>
> >>><br>
> >>> I have to say that I find these stacked virtual environments quite elegant.<br>
> >>> Too bad that CSC doesn’t even include the virtualenv command in their<br>
> >>> python-env modules…<br>
> >>><br>
> >>><br>
> >>><br>
> >>> Best,<br>
> >>><br>
> >>> Yves<br>
> >>><br>
> >>><br>
> >>><br>
> >>> ________________________________<br>
> >>> From: Stephan Oepen <oe@ifi.uio.no><br>
> >>> Sent: Thursday, September 20, 2018 12:31:58 AM<br>
> >>> To: Scherrer, Yves<br>
> >>> Cc: Martin Matthiesen; infrastructure<br>
> >>><br>
> >>> Subject: Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
> >>><br>
> >>> dear all,<br>
> >>><br>
> >>> yes, chaining virtual environments appears to work as one would<br>
> >>> expect. i might in fact have managed to install OpenNMT-py on Abel,<br>
> >>> using my new PyTorch 0.4.1 virtual environment, essentially:<br>
> >>><br>
> >>> module load nlpl-pytorch<br>
> >>> /projects/nlpl/software/opennmt-py/<br>
> >>> virtualenv /projects/nlpl/software/opennmt-py/0.2.1<br>
> >>><br>
> >>> at this point, i had to manually change the ‘python’, ‘python3’, and<br>
> >>> ‘python3.5’ files in the new ‘bin/’ directory so that they use<br>
> >>> the custom glibc; see<br>
> >>> ‘http://wiki.nlpl.eu/index.php/Infrastructure/software/glibc’.<br>
> >>><br>
> >>> cd /projects/nlpl/software/modulefiles<br>
> >>> mkdir nlpl-opennmt-py<br>
> >>> cp nlpl-pytorch/0.4.1 nlpl-opennmt-py/0.2.1<br>
> >>> vi nlpl-opennmt-py/0.2.1<br>
> >>><br>
> >>> cd ~/src/nlpl<br>
> >>> module purge<br>
> >>> module load nlpl-opennmt-py<br>
> >>> wget <a href="https://github.com/OpenNMT/OpenNMT-py/archive/0.2.1.tar.gz">https://github.com/OpenNMT/OpenNMT-py/archive/0.2.1.tar.gz</a><br>
> >>> tar zpSxvf 0.2.1.tar.gz<br>
> >>> cd OpenNMT-py-0.2.1<br>
> >>> python setup.py install<br>
> >>><br>
> >>> so far, my testing is limited to<br>
> >>><br>
> >>> python -c "import torch; import onmt; print(onmt.__version__);"<br>
> >>><br>
> >>> yves, would you maybe have a chance next week to see whether this<br>
> >>> installation appears healthy to you?<br>
> >>><br>
> >>> cheers, oe<br>
> >>><br>
> >>><br>
> >>> On Wed, Sep 19, 2018 at 1:12 PM, Scherrer, Yves<br>
> >>> <yves.scherrer@helsinki.fi> wrote:<br>
> >>>> Hi Stephan, Martin,<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> I’m catching up on this thread… A few questions from my side:<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> Regarding Martin’s latest suggestion: that seems indeed to work fine,<br>
> >>>> although with the exact same commands I get a different version of<br>
> >>>> PyTorch:<br>
> >>>><br>
> >>>>>>> import torch<br>
> >>>><br>
> >>>>>>> torch.__file__<br>
> >>>><br>
> >>>><br>
> >>>> '/appl/opt/python/intelpython36-2018.3/intelpython3/lib/python3.6/site-packages/torch/__init__.py'<br>
> >>>><br>
> >>>>>>> torch.__version__<br>
> >>>><br>
> >>>> '0.4.0a0+3749c58'<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> In any case, if PyTorch is already installed in some Python distribution,<br>
> >>>> that would make setting up a specific OpenNMT module rather easy. If not,<br>
> >>>> virtual environments should work as well (the tricky thing is mainly to<br>
> >>>> figure out which python versions play well with CUDA…)<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> Regarding Stephan’s suggestion of virtual environments: do you know if<br>
> >>>> virtual environments can be “stacked”, i.e. whether I could create an<br>
> >>>> OpenNMT virtual environment that lies on top of your PyTorch environment?<br>
> >>>> Or<br>
> >>>> would I have to re-install another instance of PyTorch in the OpenNMT<br>
> >>>> virtualenv?<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> I’ll be travelling for the rest of the week, but will try to have a closer<br>
> >>>> look at these options next week.<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> Best,<br>
> >>>><br>
> >>>> Yves<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> ________________________________<br>
> >>>> From: Martin Matthiesen <martin.matthiesen@csc.fi><br>
> >>>> Sent: Wednesday, September 19, 2018 1:29:35 PM<br>
> >>>> To: Stephan Oepen<br>
> >>>> Cc: infrastructure; Scherrer, Yves<br>
> >>>><br>
> >>>> Subject: Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
> >>>><br>
> >>>> Hello Stephan,<br>
> >>>><br>
> >>>> ----- Original Message -----<br>
> >>>>> From: "Stephan Oepen" <oe@ifi.uio.no><br>
> >>>>> To: "Martin Matthiesen" <martin.matthiesen@csc.fi><br>
> >>>>> Cc: "infrastructure" <infrastructure@nlpl.eu>, "Yves Scherrer"<br>
> >>>>> <yves.scherrer@helsinki.fi><br>
> >>>>> Sent: Tuesday, 18 September, 2018 14:13:53<br>
> >>>>> Subject: Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on<br>
> >>>>> Abel)<br>
> >>>><br>
> >>>>> sorry, i was the one who had introduced the confusion about mailing<br>
> >>>>> lists. there is no ‘translation@nlpl.eu’ currently, and upon<br>
> >>>>> consultation with joerg there appears not to be a great need for it<br>
> >>>>> either (once i get around to documenting the task force structure on<br>
> >>>>> the project wiki, i might want to create that list nevertheless).<br>
> >>>>><br>
> >>>>> i am adding yves to the thread now, so he at least has a chance of knowing<br>
> >>>>> what we are talking about :-).<br>
> >>>><br>
> >>>> Ok!<br>
> >>>>><br>
> >>>>> martin, i doubt that an installation of OpenNMT that requires everyone<br>
> >>>>> to ‘pip install --user’ into their home directory will be a good<br>
> >>>>> solution. that way, the getting started instructions will be more<br>
> >>>>> complex, and we lack control over which version of PyTorch gets<br>
> >>>>> installed at the time the user actually runs the command. my<br>
> >>>>> immediate reaction at least is that NLPL-supported software should be<br>
> >>>>> ‘self-contained’, in the sense of not depending on software components<br>
> >>>>> maintained by the user.<br>
> >>>><br>
> >>>> Ok, I understand.<br>
> >>>>><br>
> >>>>> what i am doing increasingly on abel is deriving virtual environments;<br>
> >>>>> e.g. my PyTorch installation (for NLPL) straightforwardly builds on<br>
> >>>>> the USIT-maintained python 3.5. i suppose we should be able to do the<br>
> >>>>> same thing on taito, i.e. create ‘nlpl-pytorch’ as a virtual<br>
> >>>>> environment that includes the precompiled PyTorch wheel from your CSC<br>
> >>>>> colleagues?<br>
> >>>><br>
> >>>> Yes, I guess that is the only sensible solution to not lose track<br>
> >>>> completely. In the meantime, how would this work for you all:<br>
> >>>><br>
> >>>> [GPU-Env ~]$ module load python-env/intelpython3.6-2018.3<br>
> >>>> Loading application Intel Distribution for Python 2018 update 3<br>
> >>>> [GPU-Env ~]$ module list<br>
> >>>><br>
> >>>> Currently Loaded Modules:<br>
> >>>> 1) gcc/4.9.3 2) cuda/7.5 3) StdEnv 4) git/2.17.1 5)<br>
> >>>> python-env/intelpython3.6-2018.3<br>
> >>>><br>
> >>>> [GPU-Env ~]$ python3<br>
> >>>> Python 3.6.3 |Intel Corporation| (default, May 4 2018, 04:22:28)<br>
> >>>> [GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux<br>
> >>>> Type "help", "copyright", "credits" or "license" for more information.<br>
> >>>> Intel(R) Distribution for Python is brought to you by Intel Corporation.<br>
> >>>> Please check out: <a href="https://software.intel.com/en-us/python-distribution">
https://software.intel.com/en-us/python-distribution</a><br>
> >>>>>>> import torch<br>
> >>>>>>> torch.__version__<br>
> >>>> '0.4.1'<br>
> >>>><br>
> >>>> Kudos to my colleagues Markus and Jarmo here.<br>
> >>>><br>
> >>>> Martin<br>
> >>>><br>
> >>>>><br>
> >>>>> oe<br>
> >>>>><br>
> >>>>><br>
> >>>>><br>
> >>>>><br>
> >>>>> On Mon, Sep 17, 2018 at 5:06 PM, Martin Matthiesen<br>
> >>>>> <martin.matthiesen@csc.fi> wrote:<br>
> >>>>>> Hello,<br>
> >>>>>><br>
> >>>>>> We already have a way to use pytorch 0.4.1 on Taito-GPU:<br>
> >>>>>><br>
> >>>>>> module load python-env/intelpython3.6-2018.3<br>
> >>>>>> [GPU-Env ~]$ pip install -v --user<br>
> >>>>>> /appl/opt/pytorch/0.4.1/cu90/torch-0.4.1-cp36-cp36m-linux_x86_64.whl<br>
> >>>>>><br>
> >>>>>> One of my colleagues has compiled the module. Note that the module needs<br>
> >>>>>> python<br>
> >>>>>> 3.6 to work, the highest available on Taito-GPU.<br>
> >>>>>><br>
> >>>>>> Before I investigate CPU-support or support for other compilers, would<br>
> >>>>>> this<br>
> >>>>>> pip-approach work for you?<br>
> >>>>>><br>
> >>>>>> Regards,<br>
> >>>>>> Martin<br>
> >>>>>><br>
> >>>>>> ----- Original Message -----<br>
> >>>>>>> From: "Stephan Oepen" <oe@ifi.uio.no><br>
> >>>>>>> To: translation@nlpl.eu<br>
> >>>>>>> Cc: "infrastructure" <infrastructure@nlpl.eu><br>
> >>>>>>> Sent: Saturday, 15 September, 2018 18:59:29<br>
> >>>>>>> Subject: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
> >>>>>><br>
> >>>>>>> colleagues,<br>
> >>>>>>><br>
> >>>>>>> joerg, martin, and i talked about getting the new release version of<br>
> >>>>>>> OpenNMT installed for NLPL. it appears it requires the most recent<br>
> >>>>>>> version of PyTorch, which currently is not available on Taito. martin<br>
> >>>>>>> will ask for it to be installed by CSC.<br>
> >>>>>>><br>
> >>>>>>> in parallel, i believe i managed to put an NLPL-owned installation of<br>
> >>>>>>> the right PyTorch version onto Abel, please see:<br>
> >>>>>>><br>
> >>>>>>> <a href="http://wiki.nlpl.eu/index.php/Infrastructure/software/pytorch">
http://wiki.nlpl.eu/index.php/Infrastructure/software/pytorch</a><br>
> >>>>>>><br>
> >>>>>>> before announcing this more widely, i would be grateful for some<br>
> >>>>>>> testing, in particular for both cpu and gpu usage. would anyone be<br>
> >>>>>>> readily set up to give this a shot on Abel?<br>
> >>>>>>><br>
> >>>>>>> assuming our PyTorch is healthy, would someone from the helsinki team<br>
> >>>>>>> have the time to try and install OpenNMT onto Abel, e.g. as<br>
> >>>>>>><br>
> >>>>>>> /projects/nlpl/software/opennmt-py/0.2.1<br>
> >>>>>>><br>
> >>>>>>> there have been two relatively recent requests for OpenNMT in oslo<br>
> >>>>>>> (one of them for seq2seq dependency parsing :-), so i believe it would<br>
> >>>>>>> now be warranted to provide it on both systems.<br>
> >>>>>>><br>
> >>>>>>> best wishes, oe<br>
</div>
</span></font></div>
</body>
</html>