Most methods in NLP and linguistics are geared towards text, not talk. Our research aims to change this by demonstrating the importance of linguistically diverse conversational data (audio plus annotations). Our central question is: how can we use computational tools to make the language sciences conversation-ready? This is the next frontier for enabling quantitative approaches to conversational structure and for creating diversity-aware language technology. To get there, we combine methods from comparative linguistics, computational modelling and data science. Our key aims are to enable broad curation, rapid exploration and rich visualization, as showcased in our 2022 ACL, LREC and Interspeech papers.