Posted on Aug 25, 2025

Data Scientist (Voice AI)

Remote
Mid-Senior ICs
Deepgram
Series B
51-100
Software, Security & Developer Tools

We’re a foundational AI company whose mission is to make every voice heard and understood. Our end-to-end deep neural network is redefining what companies can do with voice by offering a platform with AI architectural advantage, not legacy tech retrofitted with AI. We believe corporations who harness the power of their audio data will improve the way we all live and work. Together, we’ll unlock unique languages, accents, and dialects all over the world for better communication and better experiences.

Job Description

Company Overview


Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

Opportunity:

Deepgram is looking for a Data Scientist (Voice AI) to take ownership of how we benchmark and evaluate the performance of our voice AI models. This role is pivotal to the integrity and impact of our AI offerings. You’ll be building robust benchmarking pipelines, producing clear and actionable model cards, and partnering cross-functionally with research, product, QA, marketing, and data labeling to shape how our models are measured, released, and improved. If you love designing evaluations that matter, aligning metrics with product goals, and translating data into insight, this role is for you.

What You’ll Do

  • Build and maintain scalable benchmarking pipelines for model evaluations across STT, TTS, and voice agent use cases.
  • Run regular evaluations of production and pre-release models on curated, real-world datasets.
  • Partner with Research, Data, and Engineering teams to develop new evaluation methodologies and integrate them into our development cycle.
  • Design, define and refine evaluation metrics that reflect product experience, quality, and performance goals.
  • Author comprehensive model cards and internal reports outlining model strengths, weaknesses, and recommended use cases.
  • Work closely with Data Labeling Ops to source, annotate, and prepare evaluation datasets.
  • Collaborate with QA Engineers to integrate model tests into CI/CD and release workflows.
  • Support Marketing and Product with credible, data-backed comparisons to competitors.
  • Track market developments and maintain awareness of competitive benchmarks.
  • Support GTM teams with benchmarking best practices for prospects and customers.
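To give a concrete flavor of the benchmarking work above, here is a minimal illustrative sketch (not Deepgram's actual tooling) of word error rate (WER), the standard metric used to evaluate STT model output against a reference transcript:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Levenshtein edit distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Example: one word wrong out of four.
print(wer("turn on the lights", "turn off the lights"))  # 0.25
```

A production benchmarking pipeline would wrap a metric like this with dataset loading, text normalization (punctuation, casing, number formats), per-segment aggregation, and reporting, but the core comparison is this simple.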

You’ll Love This Role If You

  • Enjoy translating model outputs into human insights that guide product strategy.
  • Are motivated by precision, fairness, and transparency in evaluation.
  • Have a data-minded approach to experimentation and thrive on uncovering what’s working—and what’s not.
  • Take pride in designing clean, repeatable benchmarks that bring clarity to complex systems.
  • Get satisfaction from cross-functional collaboration, working with researchers, product teams, and engineers alike.
  • Want to shape how we define quality and success in speech AI.
  • Are excited by the idea of being a key voice in when—and how—we release new models into the world.

It’s Important To Us That You Have

  • Experience designing, executing, and iterating on evaluation pipelines for ML models.
  • Proficiency in Python and data analysis libraries.
  • Ability to develop automated evaluation systems—whether scripting analysis workflows or integrating with broader ML pipelines.
  • Comfort working with large-scale datasets and crafting meaningful performance metrics and visualizations.
  • Experience using LLMs or internal tooling to accelerate analysis, QA, or pipeline prototyping.
  • Strong communication skills—especially when translating raw data into structured insights, documentation, or dashboards.
  • Proven success working cross-functionally with research, engineering, QA, and product teams.

It Would Be Great if You Had

  • Prior experience evaluating speech-related models, especially STT or TTS systems.
  • Familiarity with model documentation formats (e.g., model cards, eval reports, dashboards).
  • Understanding of competitive benchmarking and landscape analysis for voice AI products.
  • Experience contributing to or owning internal evaluation infrastructure—whether integrating with existing systems or proposing new ones.
  • A background in startup environments, applied research, or AI product deployment.

Don't have Voice AI experience?

You can still be a great fit if you are:

  1. A detail-oriented perfectionist who takes pride in building precise, fair evaluations guided by product goals and is driven by accuracy, automation, and transparency in pipelines.
  2. A collaborative systems thinker who excels at translating complex model outputs into actionable insights and building scalable processes that connect technical evaluation to business impact.
  3. A curious, self-directed problem solver who thrives as a quality gatekeeper in ambiguous environments, enjoys cross-functional collaboration, and is energized by uncovering what works and what doesn't.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.


Remote · Python · TypeScript · React