<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<meta content="text/html; charset=utf-8">
<meta name="x_Generator" content="Microsoft Word 15 (filtered medium)">
<style>
<!--
@font-face
{font-family:"Cambria Math"}
@font-face
{font-family:Calibri}
p.x_MsoNormal, li.x_MsoNormal, div.x_MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif}
a:x_link, span.x_MsoHyperlink
{color:blue;
text-decoration:underline}
a:x_visited, span.x_MsoHyperlinkFollowed
{color:#954F72;
text-decoration:underline}
.x_MsoChpDefault
{}
@page WordSection1
{margin:70.85pt 56.7pt 70.85pt 56.7pt}
div.x_WordSection1
{}
-->
</style>
<div lang="EN-US" link="blue" vlink="#954F72">
<div class="x_WordSection1">
<p class="x_MsoNormal">Hi Stephan, Martin,</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">I’m catching up on this thread… A few questions from my side:</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">Regarding Martin’s latest suggestion: that seems indeed to work fine, although with the exact same commands I get a different version of PyTorch:</p>
<p class="x_MsoNormal">>>> import torch</p>
<p class="x_MsoNormal">>>> torch.__file__</p>
<p class="x_MsoNormal">'/appl/opt/python/intelpython36-2018.3/intelpython3/lib/python3.6/site-packages/torch/__init__.py'</p>
<p class="x_MsoNormal">>>> torch.__version__</p>
<p class="x_MsoNormal">'0.4.0a0+3749c58'</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">In any case, if PyTorch is already installed in some Python distribution, that would make setting up a specific OpenNMT module rather easy. If not, virtual environments should work as well (the tricky thing is mainly to figure out which
python versions play well with CUDA…)</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">Regarding Stephan’s suggestion of virtual environments: do you know if virtual environments can be “stacked”, i.e. whether I could create an OpenNMT virtual environment that lies on top of your PyTorch environment? Or would I have to
re-install another instance of PyTorch in the OpenNMT virtualenv?</p>
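<p class="x_MsoNormal">(For what it’s worth, venv has a --system-site-packages flag that should let a derived environment see packages from the base interpreter, so a stacked setup along these lines might work; the paths below are just examples, untested on Taito:)</p>

```shell
# Sketch of a "stacked" setup: the OpenNMT venv inherits site-packages
# (including an already-installed PyTorch) from the base interpreter.
# The environment path below is hypothetical.
python3 -m venv --system-site-packages /projects/nlpl/software/opennmt-py/0.2.1/venv
source /projects/nlpl/software/opennmt-py/0.2.1/venv/bin/activate

# PyTorch should then resolve from the base distribution:
python3 -c "import torch; print(torch.__version__)"

# while OpenNMT-py itself installs into the venv only:
pip install OpenNMT-py
```

<p class="x_MsoNormal">(If PyTorch were not visible this way, the fallback would presumably be reinstalling the wheel inside the venv after all.)</p>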
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">I’ll be travelling for the rest of the week, but will try to have a closer look at these options next week.</p>
<p class="x_MsoNormal"> </p>
<p class="x_MsoNormal">Best,</p>
<p class="x_MsoNormal">Yves</p>
<p class="x_MsoNormal"> </p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Martin Matthiesen <martin.matthiesen@csc.fi><br>
<b>Sent:</b> Wednesday, September 19, 2018 1:29:35 PM<br>
<b>To:</b> Stephan Oepen<br>
<b>Cc:</b> infrastructure; Scherrer, Yves<br>
<b>Subject:</b> Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:11pt;">
<div class="PlainText">Hello Stephan,<br>
<br>
----- Original Message -----<br>
> From: "Stephan Oepen" <oe@ifi.uio.no><br>
> To: "Martin Matthiesen" <martin.matthiesen@csc.fi><br>
> Cc: "infrastructure" <infrastructure@nlpl.eu>, "Yves Scherrer" <yves.scherrer@helsinki.fi><br>
> Sent: Tuesday, 18 September, 2018 14:13:53<br>
> Subject: Re: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
<br>
> sorry, i was the one who had introduced the confusion about mailing<br>
> lists. there is no ‘translation@nlpl.eu’ currently, and upon<br>
> consultation with joerg there appears not to be a great need for it<br>
> either (once i get around to documenting the task force structure on<br>
> the project wiki, i might want to create that list nevertheless).<br>
> <br>
> i am adding yves to thread now, so he at least has a chance of knowing<br>
> what we are talking about :-).<br>
<br>
Ok!<br>
> <br>
> martin, i doubt that an installation of OpenNMT that requires everyone<br>
> to ‘pip install --user’ into their home directory will be a good<br>
> solution. that way, the getting started instructions will be more<br>
> complex, and we lack control over which version of PyTorch gets<br>
> installed at the time the user actually runs the command. my<br>
> immediate reaction at least is that NLPL-supported software should be<br>
> ‘self-contained’, in the sense of not depending on software components<br>
> maintained by the user.<br>
<br>
Ok, I understand. <br>
> <br>
> what i am doing increasingly on abel is deriving virtual environments;<br>
> e.g. my PyTorch installation (for NLPL) straightforwardly builds on<br>
> the USIT-maintained python 3.5. i suppose we should be able to do the<br>
> same thing on taito, i.e. create ‘nlpl-pytorch’ as a virtual<br>
> environment that includes the precompiled PyTorch wheel from your CSC<br>
> colleagues?<br>
<br>
Yes, I guess that is the only sensible solution if we are not to lose track completely. In the meantime, how would this work for you all:<br>
<br>
[GPU-Env ~]$ module load python-env/intelpython3.6-2018.3<br>
Loading application Intel Distribution for Python 2018 update 3 <br>
[GPU-Env ~]$ module list<br>
<br>
Currently Loaded Modules:<br>
1) gcc/4.9.3 2) cuda/7.5 3) StdEnv 4) git/2.17.1 5) python-env/intelpython3.6-2018.3<br>
<br>
[GPU-Env ~]$ python3<br>
Python 3.6.3 |Intel Corporation| (default, May 4 2018, 04:22:28) <br>
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux<br>
Type "help", "copyright", "credits" or "license" for more information.<br>
Intel(R) Distribution for Python is brought to you by Intel Corporation.<br>
Please check out: <a href="https://software.intel.com/en-us/python-distribution">
https://software.intel.com/en-us/python-distribution</a><br>
>>> import torch<br>
>>> torch.__version__<br>
'0.4.1'<br>
<br>
Kudos to my colleagues Markus and Jarmo here.<br>
<br>
Martin<br>
<br>
> <br>
> oe<br>
> <br>
> <br>
> <br>
> <br>
> On Mon, Sep 17, 2018 at 5:06 PM, Martin Matthiesen<br>
> <martin.matthiesen@csc.fi> wrote:<br>
>> Hello,<br>
>><br>
>> We already have a way to use pytorch 0.4.1 on Taito-GPU:<br>
>><br>
>> module load python-env/intelpython3.6-2018.3<br>
>> [GPU-Env ~]$ pip install -v --user<br>
>> /appl/opt/pytorch/0.4.1/cu90/torch-0.4.1-cp36-cp36m-linux_x86_64.whl<br>
>><br>
>> One of my colleagues has compiled the module. Note that the module needs python<br>
>> 3.6 to work, the highest available on Taito-GPU.<br>
>><br>
>> Before I investigate CPU-support or support for other compilers, would this<br>
>> pip-approach work for you?<br>
>><br>
>> Regards,<br>
>> Martin<br>
>><br>
>> ----- Original Message -----<br>
>>> From: "Stephan Oepen" <oe@ifi.uio.no><br>
>>> To: translation@nlpl.eu<br>
>>> Cc: "infrastructure" <infrastructure@nlpl.eu><br>
>>> Sent: Saturday, 15 September, 2018 18:59:29<br>
>>> Subject: [NLPL Task Force (A)] OpenNMT installation for NLPL (on Abel)<br>
>><br>
>>> colleagues,<br>
>>><br>
>>> joerg, martin, and i talked about getting the new release version of<br>
>>> OpenNMT installed for NLPL. it appears it requires the most recent<br>
>>> version of PyTorch, which currently is not available on Taito. martin<br>
>>> will ask for it to be installed by CSC.<br>
>>><br>
>>> in parallel, i believe i managed to put an NLPL-owned installation of<br>
>>> the right PyTorch version onto Abel, please see:<br>
>>><br>
>>> <a href="http://wiki.nlpl.eu/index.php/Infrastructure/software/pytorch">http://wiki.nlpl.eu/index.php/Infrastructure/software/pytorch</a><br>
>>><br>
>>> before announcing this more widely, i would be grateful for some<br>
>>> testing, in particular for both cpu and gpu usage. would anyone be<br>
>>> readily set up to give this a shot on Abel?<br>
>>><br>
>>> assuming our PyTorch is healthy, would someone from the helsinki team<br>
>>> have the time to try and install OpenNMT onto Abel, e.g. as<br>
>>><br>
>>> /projects/nlpl/software/opennmt-py/0.2.1<br>
>>><br>
>>> there have been two relatively recent requests for OpenNMT in oslo<br>
>>> (one of them for seq2seq dependency parsing :-), so i believe it would<br>
>>> now be warranted to provide it on both systems.<br>
>>><br>
>>> best wishes, oe<br>
</div>
</span></font>
</body>
</html>