Apple at WWDC just announced Workout Buddy, which brings Apple Intelligence to fitness workouts on Apple Watch for the first time.
The new feature analyzes real-time workout data alongside users' fitness history—including heart rate, pace, distance, and Activity rings—to deliver personalized motivational insights during exercise sessions. A custom text-to-speech model converts these insights into dynamic audio coaching using voice data from Fitness+ trainers. Workout Buddy processes this data privately and securely with Apple Intelligence.
The feature requires Bluetooth headphones and an Apple Intelligence-supported iPhone nearby. Initial support covers popular workout types including Outdoor and Indoor Run, Outdoor and Indoor Walk, Outdoor Cycle, HIIT, and Functional and Traditional Strength Training. Workout Buddy launches in English first, with additional languages expected later.
Workout Buddy is included in iOS 26 and watchOS 26, which will be released to the public in the fall. Beta testing is available starting today.
In iOS 26, the Messages app lets users set custom backgrounds for conversations and create polls that participants can vote on.
In the Messages app, users can now screen messages from unknown senders. Apple says messages from unknown senders will appear in a dedicated folder, where users can mark the phone number as known, ask for more information, or delete it. These messages will remain silenced until a user accepts them.
In group chats, users can now see typing indicators, plus send and receive Apple Cash.
A new Live Translation feature in the Messages app, powered by Apple Intelligence, can translate text and audio on the fly.
ChatGPT image generation is now available directly in Image Playground, with new ChatGPT styles, such as Oil Painting, Vector, Anime, Print, and Watercolor. There is also a new Image Playground API for developers to use.
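Apple didn't detail the new Image Playground API during the keynote, but a rough idea of what integration might look like can be sketched from the existing ImagePlayground framework introduced in iOS 18.2, which exposes a SwiftUI sheet for image generation. The view, prompt string, and completion handling below are placeholders, and the new ChatGPT styles would presumably surface through additional parameters.

```swift
import SwiftUI
import ImagePlayground

// Minimal sketch based on the iOS 18.2-era ImagePlayground API;
// the expanded API announced today may differ.
struct PosterIdeaView: View {
    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        Button("Generate Artwork") {
            showPlayground = true
        }
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concept: "a watercolor skyline at sunset" // hypothetical prompt
        ) { url in
            // The system hands back a URL to the generated image.
            generatedImageURL = url
        }
    }
}
```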
Text descriptions can now be turned into a Genmoji, and two emoji can be combined into a new one. There are also more options to customize Genmoji of people, with different hairstyles and expressions.
Apple has announced a major Visual Intelligence update at WWDC 2025, enabling users to search and take action on anything displayed across their iPhone apps.
The feature, which previously worked only with the camera to identify real-world objects, now analyzes on-screen content. Users can ask ChatGPT questions about what they're viewing or search Google, Etsy, and other supported apps for similar items and products.
Visual Intelligence recognizes specific objects within apps – like highlighting a lamp to find similar items online. The system also detects events on screen and suggests adding them to Calendar, automatically extracting dates, times, and locations.
Accessing the feature appears straightforward: users press the same button combination used for screenshots. They can then choose to save or share the screenshot, or explore further with Visual Intelligence.
The update effectively turns Visual Intelligence into a universal search and action tool that spans the entire iPhone experience. Apple says the feature builds on Apple Intelligence's on-device processing approach, maintaining user privacy while delivering contextual assistance across apps.
Apple today revealed a few major updates coming to Apple Music later this year in iOS 26, including the ability to pin content you return to frequently.
Apple calls this feature "Music pins," and it'll let you pin playlists, albums, and artists at the top of your Library tab.
Apple Music is also gaining lyrics translation and lyrics pronunciation, so that it's easier to listen to and appreciate music from all over the world.
There's also a new "AutoMix" feature that uses intelligence to transition from one song to the next like a DJ.
Apple today unveiled an update coming to CarPlay, which will adopt the company's new "Liquid Glass" design to align with iOS 26 later this year.
Apple said the design includes a new compact view for incoming calls that lets users see who's calling without blocking key information like upcoming directions.
Apple is also bringing Tapbacks and pinned conversations to Messages in CarPlay, along with support for widgets and Live Activities. All of these updates are also coming to CarPlay Ultra.
Apple at WWDC today unveiled Live Translation, a new feature that breaks down language barriers across its core communication apps. The capability works seamlessly within Messages, FaceTime, and Phone, powered by Apple-built models running entirely on-device to preserve privacy.
In Messages, Live Translation automatically translates text as users type, delivering messages in the recipient's preferred language. Responses are instantly translated back, making international conversations effortless.
FaceTime calls gain live caption translation, allowing users to follow along with translated text while still hearing the original speaker's voice. Phone calls take it further with spoken translation throughout the conversation.
The on-device processing ensures personal conversations remain private, with no data sent to external servers. Apple demonstrated the feature with real-time travel planning scenarios, showing how users can coordinate with friends abroad without language constraints.
Live Translation represents Apple's latest push into AI-powered communication tools, following the company's broader Apple Intelligence initiative announced at last year's WWDC.
Apple today revealed that iOS 26 is compatible with the iPhone 11 series and newer.
iOS 26 is compatible with the following iPhone models:
iPhone 16e
iPhone 16
iPhone 16 Plus
iPhone 16 Pro
iPhone 16 Pro Max
iPhone 15
iPhone 15 Plus
iPhone 15 Pro
iPhone 15 Pro Max
iPhone 14
iPhone 14 Plus
iPhone 14 Pro
iPhone 14 Pro Max
iPhone 13
iPhone 13 mini
iPhone 13 Pro
iPhone 13 Pro Max
iPhone 12
iPhone 12 mini
iPhone 12 Pro
iPhone 12 Pro Max
iPhone 11
iPhone 11 Pro
iPhone 11 Pro Max
iPhone SE (2nd generation and later)
Of course, iOS 26 will also be compatible with all future iPhone 17 models.
This means that iOS 18 is the final software version that supports the iPhone XS, iPhone XS Max, and iPhone XR, although those devices will continue to receive important security updates from Apple for at least a few more years.
If you were hoping for the more personalized version of Siri to launch soon, Apple today said that you will have to keep waiting.
During the WWDC 2025 keynote today, Apple's software engineering chief Craig Federighi said that the company will share more details about the personalized Siri features in the coming year, signaling that they are still not ready.
"As we've shared, we're continuing our work to deliver the features that make Siri even more personal," said Federighi. "This work needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year."
Apple first previewed the personalized Siri features during its WWDC 2024 keynote last June. The enhancements were initially expected to launch with iOS 18.4 a few months ago, but in March, Apple said they were delayed.
Whenever they launch, the Siri upgrades will include understanding of a user's personal context, on-screen awareness, and deeper per-app controls. For example, during its WWDC 2024 keynote, Apple showed an iPhone user asking Siri about their mother's flight and lunch reservation plans based on info from the Mail and Messages apps.
The promised Siri upgrades will require an iPhone that supports Apple Intelligence.
Apple today announced "Liquid Glass," a complete redesign of all of its major software platforms.
Announced simultaneously for iOS, iPadOS, macOS, watchOS, tvOS, visionOS, and CarPlay, Liquid Glass establishes a universal design language across Apple's platforms for the first time. At its WWDC 2025 keynote, Apple software chief Craig Federighi said that "Apple Silicon has become dramatically more powerful, enabling software, materials and experiences we once could only dream of."
Inspired by visionOS, Liquid Glass is layered throughout the system, with rounded corners matched to the curved screens of modern devices. It behaves like glass in the real world, morphing when you need more options or move between views.
App icons have been totally redesigned with multiple layers of Liquid Glass, and there is a new clear look that sits alongside light mode and dark mode. Apple also showcased design changes to the Camera app, Photos, Safari, Phone, FaceTime, and more.
The Lock Screen now features options for a clock that dynamically changes in size depending on how much space is available, 3D photos, and more. Animated album art can now take up the entire Lock Screen. Apple says the new Liquid Glass design language sets the stage for years to come.
Apple at WWDC today announced the Foundation Models framework, a new API that allows third-party developers to leverage the large language models at the heart of Apple Intelligence and build them into their apps.
With the framework, developers can integrate Apple's on-device models directly into their own apps and build experiences on top of Apple Intelligence.
"Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems," said Craig Federighi, Apple's senior vice president of Software Engineering. "We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create."
The Foundation Models framework lets developers build AI-powered features that work offline, protect privacy, and incur no inference costs. For example, an education app can generate quizzes from user notes on-device, and an outdoors app can offer offline natural language search.
Apple says the framework includes built-in features like guided generation and tool calling, making it easy to integrate generative capabilities into existing apps. It is available for testing starting today through the Apple Developer Program at developer.apple.com, and a public beta will follow through the Apple Beta Software Program at beta.apple.com next month.
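Apple hasn't published full documentation alongside the keynote, but its quiz-generation example above suggests roughly what adoption could look like. The sketch below uses the framework's guided generation as described in Apple's announcement; the exact type and method names (LanguageModelSession, @Generable, @Guide) reflect our reading of the announced API and may differ in the shipping SDK.

```swift
import FoundationModels

// A structured type the on-device model can fill in via guided generation.
// Treat the exact macro and method names as assumptions for now.
@Generable
struct QuizQuestion {
    @Guide(description: "A short question based on the user's notes")
    var question: String

    @Guide(description: "The correct answer, one sentence long")
    var answer: String
}

// Generate a quiz question entirely on-device, with no inference cost
// and no network requirement.
func makeQuizQuestion(from notes: String) async throws -> QuizQuestion {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Write one quiz question based on these notes: \(notes)",
        generating: QuizQuestion.self
    )
    return response.content
}
```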
Apple's Worldwide Developers Conference (WWDC) starts today with the traditional keynote kicking things off at 10:00 a.m. Pacific Time. MacRumors is on hand for the event and we'll be sharing details and our thoughts throughout the day.
We're expecting to see a number of software-related announcements led by a design revamp across Apple's platforms that will also see the numbering of all of them unified under "26" branding. We should also be hearing more about Apple's AI initiatives, although perhaps a bit more restrained compared to last year's ambitious unveilings that have yet to fully come to fruition.
Apple is providing a live video stream on its website, on YouTube, and in the company's TV and Developer apps across its platforms. We will also be updating this article with live blog coverage and issuing Twitter updates through our @MacRumorsLive account as the keynote unfolds. Highlights from the event and separate news stories regarding today's announcements will go out through our @MacRumors account.
Apple's WWDC 2025 begins today, with the event kicking off at 10:00 a.m. Pacific Time via the traditional opening keynote. We know that some MacRumors readers who can't follow the event as it's broadcast prefer to avoid the announcements entirely and wait until the video is available for on-demand viewing, so they can experience it without already knowing the outcome.
For those individuals, we've posted this news story, which will be updated with a direct link to the presentation once it becomes available from Apple. No other news stories or announcements will be displayed alongside this story.
Replays of Apple's recent events have been made available to view almost immediately following the conclusion of the broadcasts, and we expect similar timing for today's event.
Users waiting for the video to be posted are welcome to gather in the thread associated with this news story, and we ask that those who follow the events as they occur refrain from making any posts about Apple's announcements in this thread.
Apple recently announced that it will be opening a freshly remodeled store at La Encantada in Tucson, Arizona, on Saturday, June 14.
The grand re-opening will take place at 10 a.m. local time:
We're making moves. Come with us. Apple La Encantada opens June 14, at 10:00 a.m.
Apple La Encantada first opened in 2004, and it closed in November 2024 for renovations. Apple opened a temporary store at the outdoor mall in the interim, and now the original location will be reopening with a modern design and a larger layout than before.
We have not seen any photos of the remodeled store yet, but we expect more wood paneling and an Apple Pickup station for online orders.
A newly published Apple Machine Learning Research study challenges the prevailing narrative around "reasoning" large language models like OpenAI's o1 and Anthropic's Claude thinking variants, revealing fundamental limitations that suggest these systems aren't truly reasoning at all.
For the study, rather than using standard math benchmarks that are prone to data contamination, Apple researchers designed controllable puzzle environments including Tower of Hanoi and River Crossing. This allowed a precise analysis of both the final answers and the internal reasoning traces across varying complexity levels, according to the researchers.
The results are striking, to say the least. All tested reasoning models – including o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet – experienced complete accuracy collapse beyond certain complexity thresholds, dropping to zero success rates despite having adequate computational resources. Counterintuitively, the models actually reduced their thinking effort as problems became more complex, suggesting fundamental scaling limitations rather than resource constraints.
Perhaps most damning, even when researchers provided complete solution algorithms, the models still failed at the same complexity points. Researchers say this indicates the limitation isn't in problem-solving strategy, but in basic logical step execution.
Models also showed puzzling inconsistencies – succeeding on problems requiring 100+ moves while failing on simpler puzzles needing only 11 moves.
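To put those numbers in context, Tower of Hanoi is exactly the kind of puzzle where a complete solution algorithm can be handed to a model: the classic recursion below solves any instance, and a tower of n disks always takes 2^n - 1 moves, so "100+ moves" corresponds to only seven or eight disks. This is an illustrative sketch, not code from the paper.

```swift
// Classic recursive Tower of Hanoi: move n disks from one peg to another
// using a spare peg. The optimal solution always has 2^n - 1 moves.
func hanoi(_ n: Int, from: String, to: String, via: String,
           moves: inout [(String, String)]) {
    guard n > 0 else { return }
    hanoi(n - 1, from: from, to: via, via: to, moves: &moves)
    moves.append((from, to)) // move the largest remaining disk
    hanoi(n - 1, from: via, to: to, via: from, moves: &moves)
}

var moves: [(String, String)] = []
hanoi(7, from: "A", to: "C", via: "B", moves: &moves)
print(moves.count) // 127 moves for a 7-disk tower
```

By contrast, the River Crossing instances the models failed on can be solved in roughly 11 moves, which is what makes the inconsistency notable.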
The research highlights three distinct performance regimes: standard models surprisingly outperform reasoning models at low complexity, reasoning models show advantages at medium complexity, and both approaches fail completely at high complexity. The researchers' analysis of reasoning traces showed inefficient "overthinking" patterns, where models found correct solutions early but wasted computational budget exploring incorrect alternatives.
The takeaway from Apple's findings is that current "reasoning" models rely on sophisticated pattern matching rather than genuine reasoning capabilities. The study suggests that LLMs don't scale reasoning the way humans do, overthinking easy problems and thinking less about harder ones.
The timing of the publication is notable, having emerged just days before WWDC 2025, where Apple is expected to limit its focus on AI in favor of new software designs and features, according to Bloomberg.
Just hours away from WWDC’s opening keynote, some developers have been sharing the contents of their conference swag bags on social media. The bags are given to attendees when they register for the event, and typically contain limited-edition Apple gifts.
This year, developers have been registering at Apple's Infinite Loop campus, where they have been gifted a black tote bag emblazoned with the WWDC 2025 logo, along with a gun-metal black drinks flask, a purple lanyard, and collectible enamel pins.
Apple introduced the popular pin packs at WWDC 2017 and kicked off collections with the old rainbow-themed Apple logo, the "hello" Mac greeting, the Swift and Metal logos, the original Macintosh, and emojis. Attendees also received a flag pin of their home country.
Among the various pins this year are the Apple Intelligence logo, the "hello" Mac greeting, the Metal logo, California roses, and what looks like an octopus emoji. Attendees also receive a WWDC 25 badge.
MacRumors will be in attendance at the keynote, with live coverage of the event beginning shortly after 10:00 a.m. Pacific Time. Stay tuned to MacRumors.com and our @MacRumorsLive account on X (Twitter). We've also put together a guide explaining all the ways you can watch Apple's WWDC 2025 Keynote live as it happens.