
Ask anyone who uses their phone hands-free what the most frustrating thing about Siri is, and you’ll hear a version of the same answer: you have to treat it like a vending machine. One request, one result, repeat. Want to set a timer, send a message, and queue up a playlist before leaving the house? That’s three separate interactions, three moments of waiting, three chances for Siri to mishear you. Apple is reportedly fixing exactly this in iOS 27, and if it works as described, it’s the most meaningfully useful Siri upgrade in years.
According to Bloomberg reporting via MacRumors, Apple is testing the ability for Siri to handle multiple requests within a single command. Instead of chaining three separate voice prompts back to back, you’d issue one combined instruction and Siri would parse it, break it into tasks, and execute them sequentially. Set a timer, send a message, play music: one breath, one ask.
It sounds straightforward, but the underlying complexity is significant. Siri needs to understand not just individual instructions but the relationship between them, the intended order, and the context of each, all in real time. That’s a fundamentally different architecture than the current system, which processes one command and stops.
This feature doesn’t exist in isolation. It’s one piece of arguably the most ambitious Siri overhaul Apple has ever attempted, rolling out across iOS 27, iPadOS 27, and macOS 27 simultaneously. The broader rebuild includes better understanding of pronouns and on-screen content, short-term memory for follow-up requests, deeper in-app context awareness, and system-wide “Ask Siri” buttons integrated throughout the OS.
Perhaps most significantly, Apple is reportedly building a standalone Siri app with a dedicated interface that stores conversation history and supports both voice and text input, functioning much like ChatGPT or Gemini’s own apps. That’s a philosophical shift. Siri stops being a floating overlay and becomes a destination.
On top of that, a new extensions system would let third-party AI tools plug directly into Siri; ChatGPT, Google Gemini, and Anthropic’s Claude are all mentioned as potential integrations. Rather than competing with every AI assistant, Apple is building a platform that orchestrates them. It’s the same “host everything” logic behind the broader Apple AI strategy announced this week.
Google Assistant and Gemini on Android have handled multi-step requests more gracefully than Siri for several years now. “Hey Google, set an alarm for 7am, remind me to call Mum at noon, and add milk to my shopping list”: that kind of compound instruction has worked on Pixel devices for a while. Samsung’s Bixby has attempted similar functionality with mixed results. Apple is arriving later to this capability, but arriving with the full weight of Apple Intelligence, Gemini integration, and a rebuilt architecture behind it. Late but potentially better, which is Apple’s playbook.