Per-Task Model Selection

Configure different OpenAI models for different kinds of tasks. Use GPT-5.4 for long-form reasoning, o3-mini for quick replies, and a fine-tuned model for classification — all from WhatsApp.

Tags: WhatsApp DM · models · routing · config

Conversation example

Route fast replies to o3-mini, long reasoning to gpt-5.4, and classification to the fine-tuned classifier

file_write → config/models.json

Updating model routing config...

Model routing updated

- Reasoning (long): gpt-5.4
- Fast reply (under 300 tokens): o3-mini
- Classification: ft:o3-mini:lunogen:classifier
- Embeddings: text-embedding-3-large
- Image generation: dall-e-3

Estimated cost impact: about -38% vs. using gpt-5.4 for everything, based on last week's traffic mix. I'll rerun the benchmarks after 24 hours and report back.
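A plausible shape for the `config/models.json` file the agent writes is sketched below. The exact schema is LunoGen-internal, so the field names (`routes`, `max_output_tokens`, `fallback`) are assumptions; only the model IDs come from the routing summary above.

```json
{
  "routes": {
    "reasoning_long": { "model": "gpt-5.4" },
    "fast_reply": { "model": "o3-mini", "max_output_tokens": 300 },
    "classification": { "model": "ft:o3-mini:lunogen:classifier" },
    "embeddings": { "model": "text-embedding-3-large" },
    "image_generation": { "model": "dall-e-3" }
  },
  "fallback": "o3-mini"
}
```

Keeping a `fallback` route means a request that matches no task type still gets served by the cheap fast-reply model rather than failing.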


Deploy this in minutes

Create a LunoGen agent, connect OpenAI, and start running this workflow from WhatsApp today.
