Software Engineer, Inference - Multi Modal
Company: OpenAI
Location: San Francisco
Posted on: June 1, 2025
Job Description:
About the Team
OpenAI's Inference team powers the deployment of our most
advanced models - including our GPT models, 4o Image Generation,
and Whisper - across a variety of platforms. Our work ensures these
models are available, performant, and scalable in production, and
we partner closely with Research to bring the next generation of
models into the world. We're a small, fast-moving team of engineers
focused on delivering a world-class developer experience while
pushing the boundaries of what AI can do.
We're expanding into
multimodal inference, building the infrastructure needed to serve
models that handle image, audio, and other non-text modalities.
These workloads are inherently more heterogeneous and experimental,
involving diverse model sizes and interactions, more complex
input/output formats, and tighter coordination with product and
research.
About the Role
We're looking for a software engineer to
help us serve OpenAI's multimodal models at scale. You'll be part
of a small team responsible for building reliable, high-performance
infrastructure for serving real-time audio, image, and other
multimodal workloads in production.
This work is inherently cross-functional:
you'll collaborate directly with researchers training these models
and with product teams defining new modalities of interaction.
You'll build and optimize the systems that let users generate
speech, understand images, and interact with models in ways far
beyond text.
In this role, you will:
- Design and implement inference infrastructure for large-scale
multimodal models.
- Optimize systems for high-throughput, low-latency delivery of
image and audio inputs and outputs.
- Enable experimental research workflows to transition into
reliable production services.
- Collaborate closely with researchers, infra teams, and product
engineers to deploy state-of-the-art capabilities.
- Contribute to system-level improvements including GPU
utilization, tensor parallelism, and hardware abstraction
layers.
You might thrive in this role if you:
- Have experience building and scaling inference systems for LLMs
or multimodal models.
- Have worked with GPU-based ML workloads and understand the
performance dynamics of large models, especially with complex data
like images or audio.
- Enjoy experimental, fast-evolving work and collaborating
closely with research.
- Are comfortable dealing with systems that span networking,
distributed compute, and high-throughput data handling.
- Have familiarity with inference tooling like vLLM,
TensorRT-LLM, or custom model parallel systems.
- Own problems end-to-end and are excited to operate in
ambiguous, fast-moving spaces.
Nice to Have:
- Experience working with image generation or audio synthesis
models in production.
- Exposure to distributed ML training or system-efficient model
design.
About OpenAI
OpenAI is an AI research and deployment company
dedicated to ensuring that general-purpose artificial intelligence
benefits all of humanity. We push the boundaries of the
capabilities of AI systems and seek to safely deploy them to the
world through our products. AI is an extremely powerful tool that
must be created with safety and human needs at its core, and to
achieve our mission, we must encompass and value the many different
perspectives, voices, and experiences that form the full spectrum
of humanity.
We are an equal opportunity employer and do not
discriminate on the basis of race, religion, national origin,
gender, sexual orientation, age, veteran status, disability or any
other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US-based candidates: Pursuant to the San Francisco Fair Chance
Ordinance, we will consider qualified applicants with arrest and
conviction records.
We
are committed to providing reasonable accommodations to applicants
with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial
intelligence has the potential to help people solve immense global
challenges, and we want the upside of AI to be widely shared. Join
us in shaping the future of technology.
Compensation
$310K - $460K + Offers Equity