Myle Ott

I’m an AI research engineer in NYC and part of the founding team at Character AI. Previously, I led large language model efforts at Meta AI (FAIR), including Fully Sharded Data Parallel (FSDP), fairseq, OPT-175B, and RoBERTa.

Contact:
Email
GitHub
Google Scholar
LinkedIn
Twitter

Publications

2022
OPT: Open Pre-trained Transformer Language Models. Meta AI (OPT Team).
Efficient Large Scale Language Modeling with Mixtures of Experts. Meta AI (XLM-G Team). EMNLP 2022.
Few-shot Learning with Multilingual Language Models. Meta AI (XLM-G Team). EMNLP 2022.
2021
NormFormer: Improved Transformer Pretraining with Extra Normalization. Sam Shleifer, Jason Weston, Myle Ott.
Fully Sharded Data Parallel: faster AI training with fewer GPUs. Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano.
Larger-Scale Transformers for Multilingual Masked Language Modeling. Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. ACL RepL4NLP Workshop 2021.
Recipes for Building an Open-Domain Chatbot. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. EACL 2021.
Analyzing the Forgetting Problem in Pretrain-Finetuning of Open-domain Dialogue Response Models. Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng. EACL 2021.
Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, Rob Fergus. PNAS 2021.
Residual Energy-Based Models for Text. Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc'Aurelio Ranzato, Arthur Szlam. JMLR 2021.
2020
Few-shot Sequence Learning with Transformers. Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam. NeurIPS Meta-Learning Workshop 2020.
Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art. Patrick Lewis, Myle Ott, Jingfei Du, Veselin Stoyanov. EMNLP ClinicalNLP Workshop 2020.
General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference. Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, Veselin Stoyanov. EMNLP Findings 2020.
Unsupervised Cross-lingual Representation Learning at Scale. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov. ACL 2020.
On the Evaluation of Machine Translation Systems Trained with Back-Translation. Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, Michael Auli. ACL 2020.
Residual Energy-Based Models for Text Generation. Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, Marc'Aurelio Ranzato. ICLR 2020.
2019
RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
fairseq: A Fast, Extensible Toolkit for Sequence Modeling. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. NAACL 2019 (demo).
The FLoRes Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English. Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc'Aurelio Ranzato. EMNLP 2019 (Best Resource Paper award).
Mixture Models for Diverse Machine Translation: Tricks of the Trade. Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato. ICML 2019.
Facebook AI's WAT19 Myanmar-English Translation Task Submission. Peng-Jen Chen, Jiajun Shen, Matt Le, Vishrav Chaudhary, Ahmed El-Kishky, Guillaume Wenzek, Myle Ott, Marc'Aurelio Ranzato. WAT 2019 (winning submission to the Myanmar-English translation task).
Facebook FAIR's WMT19 News Translation Task Submission. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. WMT 2019 (winning submission to the De-En and Ru-En translation tasks).
2018
Phrase-Based & Neural Unsupervised Machine Translation Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato EMNLP 2018 (Best Long Paper award)
Understanding Back-Translation at Scale Sergey Edunov, Myle Ott, Michael Auli, David Grangier EMNLP 2018 (Winning submission to WMT'18 English-to-German translation task)
Scaling Neural Machine Translation Myle Ott, Sergey Edunov, David Grangier, Michael Auli WMT 2018
Analyzing Uncertainty in Neural Machine Translation Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato ICML 2018
Classical Structured Prediction Losses for Sequence to Sequence Learning Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato NAACL 2018
Earlier Work
Towards a General Rule for Identifying Deceptive Opinion Spam. Jiwei Li, Myle Ott, Claire Cardie, Eduard Hovy. ACL 2014.
Impact of Mobility and Timing on User-Generated Content. Gabriele Piccoli, Myle Ott. MIS Quarterly 2014.
Identifying Manipulated Offerings on Review Portals. Jiwei Li, Myle Ott, Claire Cardie. EMNLP 2013.
Properties, Prediction, and Prevalence of Useful User-Generated Comments for Descriptive Annotation of Social Media Objects. Elaheh Momeni, Claire Cardie, Myle Ott. ICWSM 2013.
Negative Deceptive Opinion Spam. Myle Ott, Claire Cardie, Jeff Hancock. NAACL 2013.
Estimating the Prevalence of Deception in Online Review Communities. Myle Ott, Claire Cardie, Jeff Hancock. WWW 2012.
IBM at TREC 2012: Microblog Track. Myle Ott, Vittorio Castelli, Hema Raghavan, Radu Florian. TREC 2012.
In Search of a Gold Standard in Studies of Deception. Stephanie Gokhman, Jeff Hancock, Poornima Prabhu, Myle Ott, Claire Cardie. EACL 2012 Workshop on Deception.
Multi-aspect Sentiment Analysis with Topic Models. Bin Lu, Myle Ott, Claire Cardie, Benjamin Tsou. ICDM 2011 Workshop on Sentiment.
Finding Deceptive Opinion Spam by Any Stretch of the Imagination. Myle Ott, Yejin Choi, Claire Cardie, Jeff Hancock. ACL 2011 (Test of Time award in 2021).
