Opinion
20 Mar 2024 · 5 mins
Artificial Intelligence | Business
AI is revolutionizing the way we live and work, but critical ethical and production concerns will cause roadblocks.
Since its inception, artificial intelligence (AI) has been changing fast. With the introduction of ChatGPT, DALL-E, and other generative AI tools, 2023 emerged as a year of great progress, putting AI into the hands of the masses. But for all its glory, we're also at an inflection point.
AI will revolutionize industries and augment human capabilities, but it will also raise important ethical questions. We’ll have to think critically about whether easier and faster AI-powered tasks are better—or just easier and faster. Are the same tools high school students are using to write their papers the ones we can rely upon to power enterprise-grade applications?
The short answer is no, but the hype might lend itself to another story. It’s clear that AI is primed for another landmark year, but it’s how we navigate the challenges it brings that will determine its true value. Here are three potential growing pains business leaders should keep in mind as they embark on their AI journey in 2024.
LLMs will cause struggles
Prompt engineering is one thing, but building applications of large language models (LLMs) that deliver accurate, enterprise-grade results is harder than initially advertised. LLMs promise to make AI tasks smarter, smoother, and more scalable than ever, but getting them to operate efficiently is a roadblock many businesses will face. Getting started is simple; reaching the accuracy and reliability that enterprise use demands is not.
Dealing with robustness, fairness, bias, truthfulness, and data leakage takes a lot of work, and all are prerequisites for getting LLMs into production safely. Take healthcare, for example. Recent academic research found that GPT models performed poorly on critical tasks such as named entity recognition (NER) and de-identification. In fact, PubMedBERT, a healthcare-specific model, significantly outperformed the general-purpose GPT models on NER, relation extraction, and multi-label classification tasks.
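To make the task concrete: de-identification means scrubbing protected health information (PHI) from clinical text before analysis. The toy, rule-based sketch below is nowhere near production grade; real systems rely on trained models precisely because simple patterns like these miss most identifiers.

```python
import re

# Toy de-identification: mask a few obvious PHI patterns with placeholders.
# Regexes alone miss most names and context-dependent identifiers; this only
# illustrates what the task is, not how production systems solve it.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),     # dates like 3/20/2024
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def deidentify(note: str) -> str:
    """Replace each matched PHI pattern with its placeholder."""
    for pattern, placeholder in PHI_PATTERNS:
        note = pattern.sub(placeholder, note)
    return note

note = "Seen on 3/20/2024. Contact: jane.doe@example.com, SSN 123-45-6789."
print(deidentify(note))
# -> Seen on [DATE]. Contact: [EMAIL], SSN [SSN].
```

The gap between this sketch and a safe system is exactly the robustness work described above, which is why domain-specific models beat general-purpose LLMs on these tasks.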
Cost is another major concern when applying GPT models to such tasks. Some LLMs are two orders of magnitude more expensive to run than smaller models. Continuing with the healthcare example: given the sheer volume of clinical information to analyze, that gap significantly reduces the economic viability of GPT-based solutions. As a result, we'll unfortunately see many LLM-specific projects stall or fail entirely.
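A back-of-envelope calculation shows how a two-orders-of-magnitude price gap compounds at clinical-document scale. The per-token prices below are hypothetical placeholders, not any provider's actual pricing:

```python
# Back-of-envelope inference cost: large general-purpose LLM vs. a smaller
# domain-specific model. All prices are hypothetical illustrations.
LARGE_LLM_COST_PER_1K_TOKENS = 0.03      # assumed price for a frontier LLM
SMALL_MODEL_COST_PER_1K_TOKENS = 0.0003  # assumed: ~100x cheaper smaller model

def analysis_cost(num_documents: int, tokens_per_document: int,
                  cost_per_1k_tokens: float) -> float:
    """Total cost of running inference over an entire document corpus."""
    total_tokens = num_documents * tokens_per_document
    return total_tokens / 1000 * cost_per_1k_tokens

# A modest clinical workload: 1 million notes at roughly 2,000 tokens each.
docs, tokens = 1_000_000, 2_000
print(f"Large LLM:   ${analysis_cost(docs, tokens, LARGE_LLM_COST_PER_1K_TOKENS):,.0f}")
print(f"Small model: ${analysis_cost(docs, tokens, SMALL_MODEL_COST_PER_1K_TOKENS):,.0f}")
# -> Large LLM:   $60,000
# -> Small model: $600
```

At this assumed gap, the same workload costs $60,000 on the large model versus $600 on the small one, which is the kind of difference that stalls a project's business case.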
Domain specification is no longer a nice-to-have
Using OpenAI's models to ask questions in a domain-specific setting, like healthcare or the legal industry, isn't enough. Some tasks can't be solved simply by tuning models; engineering and domain expertise are crucial. You wouldn't ask a data scientist to perform surgery, so don't expect AI to carry out industry-specific tasks without a professional at the helm.
According to a survey from Gradient Flow, when asked about intended users of AI tools and technologies, over half of respondents identified clinicians (61%) as target users, and close to half indicated that healthcare providers (45%) are among their target users. Technical leaders were also more likely to cite healthcare payers and drug development professionals as potential users of AI applications.
It's likely that this shift in emphasis from data science skills to domain expertise will continue in healthcare and beyond, especially with the uptick in low- and no-code tools. This is an important development, as democratizing AI will open the doors for more users to drive innovation. But as it stands, the best results occur when engineers and domain experts work in tandem.
Responsible AI is becoming SOP
Another challenge we'll face, albeit a long-overdue and positive one, is ethical regulation taking shape. Legal precedents and guidelines that prioritize vendor responsibility will become a standard business requirement for the use of AI tools. We're already seeing this materialize with Biden's Executive Order on AI and the UK's AI regulations.
It's an important step, especially considering that third-party AI tools are responsible for over half (55%) of AI-related failures in organizations, according to recent research from MIT Sloan Management Review and Boston Consulting Group. The consequences of these failures include reputational damage, financial losses, loss of consumer trust, and litigation. All of this highlights the need for vendor responsibility, with real consequences when proper measures aren't taken.
Although the road to production may be longer than before, there's little economic value in investing in solutions that can ultimately hurt your business. If you sell software, you're directly accountable for what it does in production. Adhering to ethical AI standards is no longer just the right thing to do; increasingly, it's a legal requirement, and it will become standard operating procedure, as it should be.
While the road ahead may be rocky, 2024 will be another defining year for AI. Innovation is moving faster than ever, but it's vital to consider whether we're doing more good than harm. Although we're bound to experience some real industry growing pains, it's likely to be another breakthrough year for AI.