Web browsers now commonly sport AI services provided by on-device or cloud-based models. However, a few holdouts remain convinced it’s a bad idea.
Vivaldi Technologies, maker of the Chromium-based Vivaldi browser, in February took a stand, declaring that it won’t implement large language models (LLMs) in its browser until their deficiencies have been addressed – which could be a while.
“LLMs are essentially confident-sounding lying machines with a penchant to occasionally disclose private data or plagiarize existing work,” Julien Picalausa, a software developer at Vivaldi, said in a memo to users. “While they do this, they also use vast amounts of energy and are happy using all the GPUs you can throw at them, which is a problem we’ve seen before in the field of cryptocurrencies.”
Vivaldi CEO Jon von Tetzchner told The Register, “When we ask our users whether they want AI, the answer is a pretty clear no. The users do not see the value and neither do we. We are also concerned about this leading to more data collection and user profiling. AI is in many ways the next step in the surveillance economy and we would rather see things reversed there.”
Von Tetzchner acknowledges that AI can be genuinely useful for applications including translation, voice recognition, and various forms of research that rely on pattern recognition.
“But when it comes to building it into the browser, it becomes another way to watch what you are doing and build a profile, locally or in the cloud,” he said. “We see that as a massive security issue and we see that our users overall see this as something they would rather stay away from. Kind of like with ‘personalized ads,’ companies are pushing solutions that users would rather avoid.”
That’s evident from the pushback Mozilla has received from users who object to its plans to integrate AI services into Firefox.
As one forum participant remarked a few days ago, “A chatbot can never truly endorse or respect privacy as their entire design is propped up on the denial thereof – an outright hostile harvesting of the entire internet with zero regard to the damage caused. Your inclusion of AI features in [F]irefox is a fundamental endorsement of that denial of privacy. This kind of naked trend-chasing erodes [Firefox’s] identity as a browser, and a Firefox that’s philosophically indistinguishable from Chrome is a Firefox that has no future at all.”
In July, an issue filed in the repository of LibreWolf, an independent fork of Mozilla’s Firefox, asked that Firefox’s ongoing implementation of AI be removed or disabled in LibreWolf. There’s now a pull request to make that happen.
And of course, there are other niche browsers that aren’t embedding AI at all.
However, other browser makers have already committed, indifferent to arguments that AI is fundamentally unethical content laundering, among other flaws. Microsoft now refers to Edge as “your AI-Powered browser.” Brave has implemented Leo, a chatbot that relies on Brave-hosted models and promises privacy. Opera has a chatbot service called Aria.
Apple is preparing to roll out Apple Intelligence across its various operating systems, and its Safari browser in iOS 18.1 will gain a webpage summarization option in Reader mode.
Google has even more ambitious plans. The Chocolate Factory has made its built-in Gemini Nano model available in Chrome to early preview program participants. And it’s developing a Prompt API so developers can issue instructions and receive responses, as well as APIs for summarization, writing, and rewriting. You may also have seen or activated the “Help me write” prompt in Chrome, powered by its Gemini generative AI.
Using these browser APIs, which are similar to APIs already offered by model vendors like OpenAI and Anthropic, web developers will be able to create applications that interact with whatever LLMs have been made available through the compliant browsers – not just Gemini Nano.
That could be a problem. As stated in the Prompt API explainer, “We do not intend to provide guarantees of language model quality, stability, or interoperability between browsers. In particular, we cannot guarantee that the models exposed by these APIs are particularly good at any given use case.”
So web developers, as things currently stand, have to anticipate how their application might respond given the many different models a Chromium browser might make available.
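In practice, that means feature-detecting the model and degrading gracefully when it isn’t there or isn’t good enough. Below is a minimal sketch in TypeScript of what that might look like. The object names used here – window.ai, canCreateTextSession, createTextSession, and prompt – follow the early-preview shape of the Prompt API explainer and are assumptions; the surface is experimental and may differ in current Chrome builds.

```typescript
// Minimal sketch of prompting a built-in browser model via the experimental
// Prompt API. The names below (window.ai, canCreateTextSession, etc.) are
// taken from the early-preview explainer and may have changed since; treat
// them as illustrative assumptions, not a stable contract.
interface AITextSession {
  prompt(input: string): Promise<string>;
}

declare global {
  interface Window {
    ai?: {
      canCreateTextSession(): Promise<"readily" | "after-download" | "no">;
      createTextSession(): Promise<AITextSession>;
    };
  }
}

export async function summarizePage(pageText: string): Promise<string | null> {
  // Feature-detect first: the browser may expose no built-in model at all.
  if (!window.ai) return null;

  const availability = await window.ai.canCreateTextSession();
  if (availability === "no") return null;

  const session = await window.ai.createTextSession();

  // Whatever model the browser ships (Gemini Nano or otherwise) answers here.
  // The explainer makes no promises about quality or cross-browser
  // consistency, so callers should treat the output as best-effort.
  return session.prompt(
    `Summarize the following page in three sentences:\n${pageText}`
  );
}
```

The defensive checks are the point: because the spec guarantees neither the presence nor the quality of a model, an application built on these APIs has to have a fallback path for every call.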
There is a proposal to deal with this issue by allowing developers to register a specific model with the browser as an extension, so that AI services could then be invoked from a known source. But that is just one of many unresolved concerns, alongside worries that built-in AI models expand the attack surface for fingerprinting and other security problems.
Like it or not, AI has arrived in the browser – or most of them, anyway. Choose wisely. ®