We interact all day with rectangles—in our hands, on our desks, and on our walls. And as we’re typing, listening, watching, scrolling, and clicking away, the makers of these devices are engaged in a never-ending battle for dominance. The shapes may be the same, but the software, that intangible digital essence that flows through them, is not. For most of the past decade, software has been one of the defining differentiators between devices, but in the past six months, AI capabilities have begun to matter most. It’s no longer about how you can touch them, but how you can talk to them.
This was clearly evident this week at Apple’s Worldwide Developers Conference in Cupertino, California, as the company announced dozens of new AI features, all part of an umbrella of products called Apple Intelligence, and played catch-up with its competitors. While Google has made numerous missteps with its AI rollouts, its phones have not been among them. Google was one of the first to launch an AI-centric phone, in October 2023, with its flagship Pixel 8 and Pixel 8 Pro. These devices run Google’s Gemini Nano model, which powers a range of AI features directly on the phone, including AI-generated summaries of voice recordings, suggested replies in messaging apps, and a slew of object-recognition tools for visual search. Since then, smartphones with built-in AI, like the Samsung Galaxy S24 Ultra and the Xiaomi 14 Ultra, have been unveiled.
On Monday, Apple showcased a vastly improved Siri (if anyone needed an upgrade and a spa day, it was Siri) that can engage in free-form conversations and handle complex queries with nuance. Additionally, the company announced Writing Tools, which can summarize text and even give you a shortened version of a long group thread (thank you, Lord!). You can also create Genmojis, customized emojis that sound fun but will likely result in a lot of people texting back and asking, “What the hell is that supposed to be?” (Thankfully, Apple’s AI will be able to tell you.) The company also partnered with OpenAI, allowing Siri to tap ChatGPT for more advanced tasks, such as answering questions about photos or documents, or fielding queries Siri doesn’t know the answer to itself. (More on this partnership later.)
Clearly, Wall Street investors, who get excited about anything with the term AI in it, were giddy over the latest updates from Apple. For a brief moment on Wednesday following the announcement, Apple surpassed Microsoft to become the most valuable company in the world, a slot Apple once held but lost to Microsoft after that company got its own AI bump from an obsessive focus on integrating artificial intelligence into its products. These days, a hardware tweak isn’t going to generate such enthusiasm from investors—or, likely, consumers.
Case in point: earlier this month I got to try out the new 12.9-inch Apple iPad. I had planned to write about Apple’s latest rectangle and to offer some sort of analysis on what it meant for the company. And while it’s always nice to work on a smaller, thinner, lighter, faster iPad, I was frankly underwhelmed by the updates to the hardware, because at the end of the day, it’s how the software integrates with those updates that makes one device better than another. At WWDC, Apple announced several new features under the banner of Apple Intelligence for iPadOS 18. These include a revamped Siri that can engage in more natural conversations and handle complex queries directly on the latest iPads, new writing tools that let users rewrite and proofread text, and a Math Notes feature that turns the iPad into an interactive blackboard that can solve simple or complex problems with live updates. Not to mention Smart Script in Notes, which lets users scribble out their work in their own handwriting and then converts it into actual text—though it’s still TBD whether Apple’s AI can read my chicken-scratch handwriting.
Not everyone was happy with Apple’s announcements this week. The internet was filled with plenty of Apple obsessives who complained (yes, there are people on the internet who complain about things they have no involvement with) that the new AI features announced by the company were tepid at best, that the name Apple Intelligence was “cringe,” and that the look and feel of the graphics were “uninspired.” But Apple’s foray into AI is a much-needed counterbalance to the frenetic pace set by Silicon Valley. While other tech giants are in a mad dash, rolling out AI advancements without appearing to fully consider the repercussions (ahem, OpenAI), Apple is taking a slow and methodical approach. It is being “cringe” and “uninspired” and “tepid,” but honestly, that’s what the industry needs right now: a focus on thoughtful, well-implemented innovations rather than a rush to be first (ahem, Microsoft), even at the risk of causing societal harm (ahem, OpenAI and Microsoft).
Tony Fadell, the former Apple executive who helped create the iPod and iPhone, noted that Apple is taking “baby steps” into AI, and that’s exactly what it should be doing. “Today’s AI LLMs [large language models] are mostly glorified demos for the really interesting applications. They are turning into a commodity because they’re overfunded by FOMO-driven VCs who don’t truly understand the technology limitations that drive real application requirements,” Fadell wrote on X. “Hallucinations are a real problem and there is no fundamental way to get rid of them. The expectations of customers are so much higher than what can be delivered today.”