This study addresses the disagreement problem in post hoc feature attribution. Explainers such as SHAP, LIME, and gradient-based methods frequently produce contradictory feature-importance rankings for the same model. To counteract this, the authors introduce Post hoc Explainer Agreement Regularization (PEAR), a loss term added during training alongside the standard accuracy objective, which encourages greater consensus among explainers without significantly compromising accuracy. Experiments on three datasets show that PEAR offers a tunable balance between explanation consensus and predictive performance, and that it improves agreement across explainers, including ones not directly used in training. By turning disagreement into a controllable parameter, PEAR makes explanations more dependable and credible in critical machine learning applications.

New AI Study Tackles the Transparency Problem in Black-Box Models

:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA; work completed while at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA ([email protected]).

:::

Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

2. Related Work

3. PEAR: Post Hoc Explainer Agreement Regularizer

4. The Efficacy of Consensus Training

4.1 Agreement Metrics

4.2 Improving Consensus Metrics

4.3 Consistency At What Cost?

4.4 Are the Explanations Still Valuable?

4.5 Consensus and Linearity

4.6 Two Loss Terms

5. Discussion

5.1 Future Work

5.2 Conclusion, Acknowledgements, and References

Appendix

ABSTRACT

As neural networks increasingly make critical decisions in high-stakes settings, monitoring and explaining their behavior in an understandable and trustworthy manner is a necessity. One commonly used type of explainer is post hoc feature attribution, a family of methods for giving each feature in an input a score corresponding to its influence on a model’s output. A major limitation of this family of explainers in practice is that they can disagree on which features are more important than others. Our contribution in this paper is a method of training models with this disagreement problem in mind. We do this by introducing a Post hoc Explainer Agreement Regularization (PEAR) loss term alongside the standard term corresponding to accuracy; this additional term measures the difference in feature attribution between a pair of explainers. We observe on three datasets that we can train a model with this loss term to improve explanation consensus on unseen data, and we see improved consensus between explainers other than those used in the loss term. We examine the trade-off between improved consensus and model performance. Finally, we study the influence our method has on feature attribution explanations.

1 INTRODUCTION

As machine learning becomes inseparable from important societal sectors like healthcare and finance, increased transparency of how complex models arrive at their decisions is becoming critical. In this work, we examine a common task in support of model transparency that arises with the deployment of complex black-box models in production settings: explaining which features in the input are most influential in the model’s output. This practice allows data scientists and machine learning practitioners to rank features by importance – the features with high impact on model output are considered more important, and those with little impact on model output are considered less important. These measurements inform how model users debug and quality check their models, as well as how they explain model behavior to stakeholders.

1.1 Post Hoc Explanation

The methods of model explanation considered in this paper are post hoc local feature attribution scores. The field of explainable artificial intelligence (XAI) is rapidly producing different methods of this type to make sense of model behavior [e.g., 21, 24, 30, 32, 37]. Each of these methods has a slightly different formula and interpretation of its raw output, but in general they all perform the same task of attributing a model’s behavior to its input features. When tasked to explain a model’s output with a corresponding input (and possible access to the model weights), these methods answer the question, “How influential is each individual feature of the input in the model’s computation of the output?”

Figure 1: Our loss that encourages explainer consensus boosts the correlation between LIME and other common post hoc explainers. This comes at a cost of less than two percentage points of accuracy compared with our baseline model on the Electricity dataset. Our method improves consensus on six agreement metrics and all pairs of explainers we evaluated. Note that this plot measures the rank correlation agreement metric and the specific bar heights depend on this choice of metric.

Data scientists are using post hoc explainers at increasing rates – popular methods like LIME and SHAP have had over 350 thousand and 6 million downloads of their Python packages in the last 30 days, respectively [23].
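To make the usage pattern concrete, here is a minimal sketch that asks both libraries how influential each feature was in one prediction of a black-box model. The scikit-learn random forest and the synthetic data are placeholders rather than anything from the paper, and exact return formats can differ slightly between library versions.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Placeholder "black-box" model trained on synthetic tabular data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]  # the single prediction we want explained

# LIME: fit a local surrogate model around x and report per-feature weights.
lime_explainer = LimeTabularExplainer(X, mode="classification")
lime_exp = lime_explainer.explain_instance(x, model.predict_proba, num_features=8)
print(lime_exp.as_list())

# SHAP (model-agnostic KernelExplainer): estimate Shapley values for x,
# summarizing the background data with k-means to keep the estimate fast.
shap_explainer = shap.KernelExplainer(model.predict_proba, shap.kmeans(X, 25))
shap_values = shap_explainer.shap_values(x)
print(shap_values)
```

Both calls return one importance score per input feature, but nothing in either procedure forces the resulting rankings to match, which is exactly the problem discussed next.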

1.2 The Disagreement Problem

The explosion of different explanation methods leads Krishna et al. [15] to observe that when neural networks are trained naturally, i.e., for accuracy alone, post hoc explainers often disagree on how much different features influenced a model’s outputs. They coin the term the disagreement problem and argue that when explainers disagree about which features of the input are important, practitioners have little concrete evidence as to which of the explanations, if any, to trust.

There is an important discussion around local explainers and their true value in reaching the communal goal of model transparency and interpretability [see, e.g., 7, 18, 29]; indeed, there are ongoing discussions about the efficacy of present-day explanation methods in specific domains [for healthcare see, e.g., 8]. Feature importance estimates may fail at making a model more transparent when the model being explained is too complex to allow for easily attributing the output to the contribution of each individual feature.

In this paper, we make no normative judgments with respect to this debate, but rather view “explanations” as signals to be used alongside other debugging, validation, and verification approaches in the machine learning operations (MLOps) pipeline. Specifically, we take the following practical approach: make the amount of explanation disagreement a controllable model parameter instead of a point of frustration that catches stakeholders off-guard.

1.3 Encouraging Explanation Consensus

Consensus between two explainers does not require that the explainers output exactly the same scores for each feature. Rather, consensus between explainers means that whatever disagreement they exhibit can be reconciled. In a survey, data scientists and machine learning practitioners report that explanations are in basic agreement if they satisfy agreement metrics that align with human intuition, which provides a quantitative way to evaluate the extent to which consensus is achieved [15]. When faced with disagreement between explainers, a choice has to be made about what to do next – if such an arbitrary crossroads moment is avoidable via specialized model training, we believe it would be a valuable addition to a data scientist’s toolkit.
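As a sketch of how such intuition-aligned metrics can be computed, the snippet below implements two of them, top-k feature agreement and rank correlation, for a pair of attribution vectors. Treating attributions as flat NumPy vectors and the particular value of k are assumptions made for illustration; the full set of metrics from Krishna et al. [15] is described in Section 4.

```python
import numpy as np
from scipy.stats import spearmanr

def feature_agreement(attr_a, attr_b, k=5):
    """Fraction of the top-k most important features (by absolute score)
    shared by two attribution vectors for the same input."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

def rank_correlation(attr_a, attr_b):
    """Spearman rank correlation between the two sets of feature scores."""
    return spearmanr(attr_a, attr_b).correlation

# Example: two hypothetical explanations of the same prediction.
lime_attr = np.array([0.40, -0.10, 0.05, 0.30, -0.02])
shap_attr = np.array([0.35, -0.20, 0.10, 0.25, 0.01])
print(feature_agreement(lime_attr, shap_attr, k=2))  # 1.0: same top-2 features
print(rank_correlation(lime_attr, shap_attr))
```

Values near 1 suggest the two explanations can plausibly be reconciled, while low or negative values signal the kind of disagreement described above.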

We propose, as our main contribution, a training routine to help alleviate the challenge posed by post hoc explanation disagreement. Achieving better consensus between explanations does not provide more interpretability to a model inherently. But, it may lend more trust to the explanations if different approaches to attribution agree more often on which features are important. This gives consensus the practical benefit of acting as a sanity check – if consensus is observed, the choice of which explainer a practitioner uses is less consequential with respect to downstream stakeholder impact, making their interpretation less subjective.
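As a rough sketch of what such a training routine could look like, the code below adds a consensus penalty between two differentiable explainers to a standard classification loss. The choice of Grad and Grad*Input as the explainer pair, the cosine-distance disagreement measure, and the `lam` weighting are illustrative assumptions here, not the exact formulation given later in the paper.

```python
import torch
import torch.nn.functional as F

def grad_attr(model, x):
    """Vanilla-gradient attribution: gradient of the summed logits w.r.t. the input.
    create_graph=True keeps the attribution differentiable so the consensus
    penalty below can itself be trained through."""
    x = x.clone().requires_grad_(True)
    (attr,) = torch.autograd.grad(model(x).sum(), x, create_graph=True)
    return attr

def grad_times_input_attr(model, x):
    """Gradient*Input attribution."""
    return grad_attr(model, x) * x

def consensus_training_loss(model, x, y, lam=0.5):
    """Task loss plus a penalty on disagreement between two explainers.
    Disagreement is measured here as one minus cosine similarity between
    per-example attribution vectors (an illustrative choice)."""
    task_loss = F.cross_entropy(model(x), y)
    a = grad_attr(model, x).flatten(1)
    b = grad_times_input_attr(model, x).flatten(1)
    disagreement = (1.0 - F.cosine_similarity(a, b, dim=1)).mean()
    return task_loss + lam * disagreement
```

The `lam` coefficient is what makes the amount of disagreement a controllable knob in this sketch: setting it to zero recovers ordinary accuracy-only training, while larger values trade some accuracy for greater explainer consensus.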

2 RELATED WORK

Our work focuses on post hoc explanation tools. Some post hoc explainers, like LIME [24] and SHAP [21], are proxy models trained atop a base machine learning model with the sole intention of “explaining” that base model. These explainers rely only on the model’s inputs and outputs to identify salient features. Other explainers, such as Vanilla Gradients (Grad) [32], Gradient Times Input (Grad*Input) [30], Integrated Gradients (IntGrad) [37] and SmoothGrad [34], do not use a proxy model but instead compute the gradients of a model with respect to input features to identify important features.[1] Each of these explainers has its quirks and there are reasons to use, or not use, them all—based on input type, model type, downstream task, and so on. But there is an underlying pattern unifying all these explanation tools. Han et al. [12] provide a framework that characterizes all the post hoc explainers used in this paper as different types of local-function approximation. For more details about the individual post hoc explainers used in this paper, we refer the reader to the individual papers and to other works about when and why to use each one [see, e.g., 5, 13].
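For the gradient-based family in particular, the sketch below gives simplified versions of Grad, Grad*Input, and Integrated Gradients for a PyTorch classifier. Attributing the top predicted logit, using a zero baseline, and approximating the path integral with a plain Riemann sum are simplifications for illustration, not the reference implementations from the cited papers.

```python
import torch

def _predicted_logit(model, x):
    """Scalar to attribute: the logit of the predicted class for a single input x."""
    out = model(x.unsqueeze(0)).squeeze(0)
    return out[out.argmax()]

def vanilla_grad(model, x):
    """Grad: gradient of the predicted logit with respect to the input features."""
    x = x.clone().requires_grad_(True)
    _predicted_logit(model, x).backward()
    return x.grad

def grad_times_input(model, x):
    """Grad*Input: elementwise product of the gradient and the input."""
    return vanilla_grad(model, x) * x

def integrated_gradients(model, x, steps=50):
    """IntGrad: average gradients along a straight path from a zero baseline to x."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        _predicted_logit(model, point).backward()
        total += point.grad
    return (x - baseline) * total / steps
```

Each function returns one score per input feature, which is the common currency that makes cross-explainer comparisons, and hence the disagreement problem, possible.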

We build directly on prior work that defines and explores the disagreement problem [15]. Disagreement here refers to the difference in feature importance scores between two feature attribution methods, but it can be quantified in several different ways using the metrics that Krishna et al. [15] define and use. We describe these metrics in Section 4.

The method we propose in this paper relates to previous work that trains models with constraints on explanations via penalties on the disagreement between feature attribution scores and handcrafted ground-truth scores [26, 27, 41]. Additionally, work has been done to leverage the disagreement between different post hoc explanations to construct new feature attribution scores that improve metrics like stability and pairwise rank agreement [2, 16, 25].

:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
