
UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help



A team of computer scientists has developed a method that helps artificial intelligence understand when to use external tools and when to rely on built-in knowledge, mimicking how human experts solve complex problems.

The research from the University of California San Diego and Tsinghua University demonstrates a 28% improvement in accuracy when AI systems learn to balance internal knowledge with external tools — a critical capability for deploying AI in scientific work.

How scientists taught AI to make better decisions

“While integrating LLMs with tools can increase reliability, this approach typically results in over-reliance on tools, diminishing the model’s ability to solve simple problems through basic reasoning,” the researchers write in their paper. “In contrast, human experts first assess problem complexity using domain knowledge before choosing an appropriate solution approach.”

The new method, called “Adapting While Learning,” uses a two-step process to train AI systems. First, the model learns directly from solutions generated using external tools, helping it internalize domain knowledge. Then, it learns to categorize problems as either “easy” or “hard” and decides whether to use tools accordingly.

The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)

Small AI model outperforms larger systems on complex tasks

What makes this development significant is its efficiency-first approach. Using a language model with just 8 billion parameters — far smaller than industry giants like GPT-4 — the researchers achieved a 28.18% improvement in answer accuracy and a 13.89% increase in tool usage precision across their test datasets. The model demonstrated particular strength in specialized scientific tasks, outperforming larger models in specific domains.

This success challenges a fundamental assumption in AI development: that bigger models necessarily yield better results. Instead, the research suggests that teaching AI when to use tools and when to rely on internal knowledge — much like training a junior scientist to know when to trust their own calculations and when to consult specialized equipment — may be more important than raw computational power.

Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)

The rise of smaller, smarter AI models

This research aligns with a broader industry shift toward more efficient AI models in 2024. Major players including Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all released smaller but highly capable models this year.

Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document analysis models have outperformed tech giants’ larger systems on specialized tasks. Even OpenAI entered the small model arena with GPT-4o Mini, offering similar capabilities at a fraction of the cost.

This trend toward “AI downsizing” reflects growing recognition that bigger isn’t always better — specialized, efficient models can often match or exceed the performance of their larger counterparts while using far fewer computational resources.

The technical approach involves two distinct learning phases. During training, the model first undergoes what the researchers call “World Knowledge Distillation” (WKD), where it learns from solutions generated using external tools. This helps it build up internal expertise.
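To make the idea concrete, here is a minimal sketch of how the WKD phase can be framed: external tools produce trusted reference solutions, and those question-solution pairs become supervised fine-tuning data the model learns from directly. This is not the authors' code; the tool function and example questions below are hypothetical stand-ins.

```python
# Hypothetical sketch of the "World Knowledge Distillation" (WKD) idea:
# external tools generate trusted solutions, and the resulting
# question/solution pairs become supervised fine-tuning data.

from dataclasses import dataclass


@dataclass
class TrainingExample:
    prompt: str
    target: str  # tool-verified answer the model should learn to reproduce


def climate_tool(question: str) -> str:
    """Stand-in for an external scientific tool (e.g., a numerical solver)."""
    # In a real system this would call a domain-specific simulator or solver.
    return f"[tool-verified solution for: {question}]"


def build_wkd_dataset(questions: list[str]) -> list[TrainingExample]:
    """Collect tool-generated solutions as supervised training examples."""
    dataset = []
    for q in questions:
        solution = climate_tool(q)  # the tool provides the reference answer
        dataset.append(TrainingExample(prompt=q, target=solution))
    return dataset


if __name__ == "__main__":
    questions = [
        "Estimate the temperature rise over 50 years at +0.02 C per year.",
        "Estimate sea-level contribution from a 1% ice-sheet mass loss.",
    ]
    for ex in build_wkd_dataset(questions):
        print(ex.prompt, "->", ex.target)
```

Fine-tuning the model on such pairs is what lets it internalize domain knowledge before it is ever asked to choose between answering directly and calling a tool.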

The second phase, “Tool Usage Adaptation” (TUA), teaches the system to classify problems based on its own confidence and accuracy in solving them directly. For simpler problems, it maintains the same approach as in WKD. But for more challenging problems, it learns to switch to using external tools.
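As an illustration of the TUA decision, the sketch below shows one simplified way such routing could work: a question goes to direct model reasoning when estimated confidence is high, and to an external tool otherwise. This is an assumption-laden simplification rather than the published implementation; the confidence function, threshold, and tool are placeholders.

```python
# Illustrative sketch of the "Tool Usage Adaptation" (TUA) decision:
# the system judges whether a problem is "easy" or "hard" and only
# invokes an external tool for the hard cases. The confidence scoring
# and threshold here are hypothetical placeholders.

def model_answer(question: str) -> str:
    """Stand-in for the model answering directly from internal knowledge."""
    return f"[direct answer to: {question}]"


def model_confidence(question: str) -> float:
    """Stand-in for a self-assessed confidence score (e.g., agreement across samples)."""
    return 0.9 if "temperature" in question.lower() else 0.3


def external_tool(question: str) -> str:
    """Stand-in for a specialized scientific tool."""
    return f"[tool-computed answer to: {question}]"


def solve(question: str, threshold: float = 0.7) -> str:
    """Route easy problems to direct reasoning and hard ones to the tool."""
    if model_confidence(question) >= threshold:  # treated as "easy"
        return model_answer(question)
    return external_tool(question)               # treated as "hard"


if __name__ == "__main__":
    print(solve("What is the average temperature rise over 50 years at +0.02 C/yr?"))
    print(solve("Plan an optimal maritime route under projected sea-ice changes."))
```

The design point the paper emphasizes is that this routing is learned rather than hard-coded: the model's own confidence and accuracy on a problem determine whether it answers directly or reaches for a tool.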

Business impact: More efficient AI systems for complex scientific work

For enterprises deploying AI systems, this research addresses a fundamental challenge that has long plagued the industry. Current AI systems represent two extremes: they either constantly reach for external tools — driving up computational costs and slowing down simple operations — or dangerously attempt to solve everything internally, leading to potential errors on complex problems that require specialized tools.

This inefficiency isn’t just a technical issue — it’s a significant business problem. Companies implementing AI solutions often find themselves paying premium prices for cloud computing resources to run external tools, even for basic tasks their AI should handle internally. On the flip side, organizations that opt for standalone AI systems risk costly mistakes when these systems attempt complex calculations without proper verification tools.

The researchers’ approach offers a promising middle ground. By teaching AI to make human-like decisions about when to use tools, organizations could potentially reduce their computational costs while maintaining or even improving accuracy. This is particularly valuable in fields like scientific research, financial modeling, or medical diagnosis, where both efficiency and precision are crucial.

Moreover, this development suggests a future where AI systems could be more cost-effective and reliable partners in scientific work, capable of making nuanced decisions about when to leverage external resources — much like a seasoned professional who knows exactly when to consult specialized tools versus rely on their expertise.

The power of knowing when to ask for help

Beyond the immediate technical achievements, this research challenges the bigger-is-better paradigm that has dominated AI development. In demonstrating that a relatively small model can outperform its larger cousins by making smarter decisions about tool use, the team points toward a more sustainable and practical future for AI.

The implications extend far beyond academic research. As AI increasingly enters domains where mistakes carry real consequences – from medical diagnosis to climate modeling – the ability to know when to seek help becomes crucial. This work suggests a future where AI systems won’t just be powerful, but prudent – knowing their limitations just as skilled professionals do.

In essence, the researchers have taught AI something fundamentally human: sometimes the smartest decision is knowing when to ask for help.



