
While we didn't hear much about Siri or Apple Intelligence during Apple's 2025 event, where new iPhones, AirPods, and Apple Watches were launched, two big AI features were announced that have largely slipped under the radar. That's mostly because they were presented simply as great new features, without the AI marketing language.
Still, I got to demo both features at Apple Park earlier this month, and my first impression was that both are nearly fully baked and ready to start improving daily life. Those are my favorite kinds of features to talk about.
Also: 5 new AI-powered features flying under the radar at Apple's launch event
Here's what they are:
1. A selfie camera that automatically frames the best shot
Apple has implemented a new kind of front-facing camera. It uses a square sensor with higher resolution than the 12-megapixel sensor on the previous model, and because the sensor is square, it outputs 18MP images in either vertical or horizontal orientation. That's the real trick: it can shoot in both formats without you rotating the phone.
In fact, you don't need to turn the phone to switch between vertical and horizontal mode. You can simply hold the phone in one hand, in whichever position you like, tap the rotate button, and the shot will flip from vertical to horizontal and vice versa. And because it's a high-megapixel, ultrawide sensor, it takes the same crisp photos in either orientation.
Now, here's where the AI comes in. You can set the front-facing camera to auto zoom and auto rotate. It will then automatically find the faces in your shot, widen or tighten the framing, and decide whether a vertical or horizontal orientation works best to fit everyone in the photo. Apple calls this its "Center Stage" feature, the same name it uses for the feature that keeps you centered on screen during video calls on the iPad and Mac.
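Apple hasn't detailed how the auto-framing decision is made, but as a rough sketch of the concept, a developer could approximate it with the public Vision framework: detect the faces, take the union of their bounding boxes, then pick the orientation and crop that fits everyone. The heuristic and the names below are my own illustrative assumptions, not Apple's implementation.

```swift
import Vision
import CoreGraphics

// Illustrative sketch only: detect faces in a square sensor frame and
// suggest a crop orientation that fits everyone. Not Apple's implementation.
struct FramingSuggestion {
    let isLandscape: Bool
    let cropRect: CGRect   // normalized coordinates (0...1)
}

func suggestFraming(for image: CGImage) throws -> FramingSuggestion? {
    // Vision's face detector returns normalized bounding boxes.
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let faces = request.results, !faces.isEmpty else { return nil }

    // The union of all face boxes shows how widely the group is spread.
    var union = faces[0].boundingBox
    for face in faces.dropFirst() {
        union = union.union(face.boundingBox)
    }

    // Assumed heuristic: a group wider than it is tall fits a landscape
    // crop better; otherwise stay in portrait.
    let isLandscape = union.width > union.height

    // "Zoom out" by padding the union so nobody gets clipped (assumed margin).
    let margin: CGFloat = 0.15
    let crop = union.insetBy(dx: -margin, dy: -margin)
        .intersection(CGRect(x: 0, y: 0, width: 1, height: 1))

    return FramingSuggestion(isLandscape: isLandscape, cropRect: crop)
}
```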
Also: Every iPhone 17 model compared
This feature technically uses machine learning, but it's still fair to call it an AI feature. The Center Stage branding is slightly misleading, though, since the selfie camera feature on the iPhone 17 is for photos, while the feature on the iPad and Mac is for video calls. The selfie camera feature is also aimed at photos with lots of people in them, while the iPad/Mac feature mostly keeps just you in the frame.
Nevertheless, after trying it on various demo iPhones at Apple Park after Tuesday's keynote, I found it easy to call this the smartest and best selfie camera yet. I have no doubt that other phone makers will start copying this feature in 2026. And the best part is that Apple didn't limit it to this year's high-end iPhone 17 Pro and Pro Max, but also put it on the standard iPhone 17 and iPhone Air. That's very good news for consumers, who, according to Apple, took 500 billion selfies on iPhones last year.
2. Live translation in AirPods Pro 3
I've said it many times before, but language translation is one of the best and most accurate uses for large language models. In fact, it's one of the best things you can do with generative AI.
This has enabled companies like Apple, which trail Google and others by a wide margin in their language translation apps, to make big strides in implementing translation features in major products. While Google Translate supports 249 different languages, Apple's Translate app supports around 20. Google Translate has been around since 2006, while Apple's Translate app launched in 2020.
Also: AirPods Pro 3 vs. AirPods Pro 2: Should you upgrade?
Nevertheless, while Google has been doing demos for years showing real-time translation on its phones and earbuds, the feature has never done a great job in the real world. Google again talked a big game about real-time translation during its Pixel 10 launch event in August, but the feature hiccuped a little even during the on-stage demo.
Enter Apple. Along with the AirPods Pro 3, it launched its Live Translation feature. And while the number of supported languages isn't as broad, the implementation is much smoother and more user friendly. I got an in-person demo with a group of other journalists at Apple Park after Tuesday's keynote.
A Spanish speaker came into the room and started talking while we had AirPods Pro 3 in our ears and an iPhone 17 Pro in our hands, running the Live Translation feature. The AirPods Pro 3 immediately went into ANC mode and began translating the words spoken in Spanish into English, so we could look at the person's face during the conversation while hearing their words in our own language, without the distraction of hearing both languages at once. And I know enough Spanish to tell that the translation was very accurate – no hiccups in this case.
It was only a brief demo, but it outperformed Google's Pixel 10 demo (which attempts the more ambitious trick of delivering the translated words in an AI-cloned version of the speaker's voice). Apple's version is a beta feature that will be limited to English, Spanish, French, German, and Portuguese to start.
Also: I replaced my AirPods Max with AirPods Pro 3, and didn't feel the $300 gap
And to be clear, the iPhone handles most of the processing, while the AirPods enhance the experience by automatically cancelling the outside noise. Because of that, this feature isn't limited to the AirPods Pro 3. It will also work with AirPods Pro 2 and AirPods 4, as long as they're connected to a phone that can run Apple Intelligence (iPhone 15 Pro or newer). This is another win for consumers.
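For readers curious about what such a pipeline involves, here's a minimal, illustrative Swift sketch of the speech-to-translation-to-speech chain using Apple's public Speech and AVFoundation frameworks. The `translateToEnglish` function is a hypothetical placeholder, and none of this is Apple's actual Live Translation code, which runs its translation through Apple Intelligence models on the phone.

```swift
import Speech
import AVFoundation

// Illustrative sketch of a listen -> transcribe -> translate -> speak chain.
// Not Apple's Live Translation implementation.
final class LiveTranslationSketch {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "es-ES"))
    private let synthesizer = AVSpeechSynthesizer()
    private var task: SFSpeechRecognitionTask?

    // Assumes microphone and speech recognition permissions are already granted.
    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = false

        // Capture microphone audio (in practice, the earbuds' mics) and feed
        // it to the Spanish speech recognizer.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let self, let result, result.isFinal else { return }
            let spanishText = result.bestTranscription.formattedString
            let englishText = self.translateToEnglish(spanishText)

            // Play the English translation back through the earbuds.
            let utterance = AVSpeechUtterance(string: englishText)
            utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
            self.synthesizer.speak(utterance)
        }
    }

    // Hypothetical placeholder: the real feature translates on device
    // with Apple Intelligence models.
    private func translateToEnglish(_ text: String) -> String {
        return text // stand-in for an actual translation step
    }
}
```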
Since I plan to test both the Pixel Buds Pro 2 with the Pixel 10 Pro XL and the AirPods Pro 3 with the iPhone 17 Pro Max, I will put their new translation features head-to-head to see how well they perform. Every week I'm in touch with friends and community members who speak one or more of the supported languages, so I'll have plenty of opportunities to try this out in the real world and then share what I learn.

