Michael Saxon

I am excited about building better language technologies for social good. I like fusing the old with the new, leveraging rich representations learned from big data to model semantics and analyze scientific communication.


PhD student, NLP Lab, Computer Science, UC Santa Barbara (2020–)
Advised by William Yang Wang
Recipient, NSF Graduate Research Fellowship (2020)


Intern, Meta AI (Cognitive AI/Conversational AI Research) (2022)
Intern, Amazon Alexa Web-based QA (2021)
Intern, Amazon Alexa Hybrid Science (2019, 2020)

MS Computer Engineering, Arizona State University (2018–2020)
Advised by Visar Berisha & Sethuraman Panchanathan

BSE Electrical Engineering, Arizona State University (2014–2018)


11/18/2022 The 2022 Southern California NLP Symposium (SoCalNLP22) was a massive success! Co-chairing the program committee and helping organize the event was a great privilege, and it was wonderful meeting everybody. Please check out our full event livestream [YouTube Link] and some event photos [Twitter:@m2saxon] [Twitter:@ucsbNLP]!

10/24/2022 SEScore, our general-purpose text-generation evaluation metric that compares outputs against references to simulate human preferences for translation and summarization, is now available on HuggingFace Spaces! Preprint here: [arXiv:2210.05035]

10/12/2022 Our work "Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis" will appear in Findings of EMNLP 2022! Preprint here: [arXiv:2210.05035]

10/12/2022 Check out the latest preprint of my work "PECO: Examining Single Sentence Label Leakage in Natural Language Inference Datasets through Progressive Evaluation of Cluster Outliers" on arXiv! We demonstrate automated detection of spurious, annotator-driven correlations that act as cheating features in NLI. Preprint here: [arXiv:2112.09237]

6/6/2022 Excited to start my 2022 AI Research Scientist Internship at Meta in Menlo Park!

12/3/2021 Our work "Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer" will appear at AAAI 2022! Preprint here: [arXiv:2110.02950]

11/8/2021 Had a great time presenting our Disclosive Transparency work at EMNLP 2021! Our work was even highlighted in an EMNLP overview article! Oral presentation prerecording: [YouTube]

10/1/2021 Our work "Counterfactual Maximum Likelihood Estimation for Training Deep Networks" will appear at NeurIPS 2021! Preprint here: [arXiv:2106.03831]

9/23/2021 Our work "Modeling Disclosive Transparency in NLP Application Descriptions" will appear at EMNLP 2021 as an oral presentation! Preprint here: [arXiv:2101.00433]

9/13/2021 I was profiled on the Amazon Science Blog about my experience doing multiple applied science internships with the company! Article here: [amazon.science]