Amit Dhurandhar
Person information
- affiliation: Thomas J. Watson Research Center, Yorktown Heights, USA
2020 – today
- 2024
- [j16] Vijay Sadashivaiah, Keerthiram Murugesan, Ronny Luss, Pin-Yu Chen, Chris R. Sims, James A. Hendler, Amit Dhurandhar: To Transfer or Not to Transfer: Suppressing Concepts from Source Representations. Trans. Mach. Learn. Res. 2024 (2024)
- [c48] Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias: NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models. ACL (Findings) 2024: 2416-2430
- [c47] Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy: Ranking Large Language Models without Ground Truth. ACL (Findings) 2024: 2431-2452
- [c46] Naiyu Yin, Hanjing Wang, Yue Yu, Tian Gao, Amit Dhurandhar, Qiang Ji: Integrating Markov Blanket Discovery Into Causal Representation Learning for Domain Generalization. ECCV (10) 2024: 271-288
- [c45] Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy: Trust Regions for Explanations via Black-Box Probabilistic Certification. ICML 2024
- [i51] Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy: Trust Regions for Explanations via Black-Box Probabilistic Certification. CoRR abs/2402.11168 (2024)
- [i50] Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy: Ranking Large Language Models without Ground Truth. CoRR abs/2402.14860 (2024)
- [i49] Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh: Multi-Level Explanations for Generative Language Models. CoRR abs/2403.14459 (2024)
- [i48] Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias: NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models. CoRR abs/2404.01306 (2024)
- [i47] Sahil Garg, Anderson Schneider, Anant Raj, Kashif Rasul, Yuriy Nevmyvaka, Sneihil Gopal, Amit Dhurandhar, Guillermo A. Cecchi, Irina Rish: Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI. CoRR abs/2404.07377 (2024)
- [i46] Tejaswini Pedapati, Amit Dhurandhar, Soumya Ghosh, Soham Dan, Prasanna Sattigeri: Large Language Model Confidence Estimation via Black-Box Access. CoRR abs/2406.04370 (2024)
- [i45] Ronny Luss, Erik Miehling, Amit Dhurandhar: CELL your Model: Contrastive Explanation Methods for Large Language Models. CoRR abs/2406.11785 (2024)
- [i44] Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson, Amit Dhurandhar: The global landscape of academic guidelines for generative AI and Large Language Models. CoRR abs/2406.18842 (2024)
- [i43] Bruce W. Lee, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Erik Miehling, Pierre L. Dognin, Manish Nagireddy, Amit Dhurandhar: Programming Refusal with Conditional Activation Steering. CoRR abs/2409.05907 (2024)
- [i42] Tian Gao, Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Dennis Wei: Identifying Sub-networks in Neural Networks via Functionally Similar Representations. CoRR abs/2410.16484 (2024)
- 2023
- [j15] Travis Greene, Amit Dhurandhar, Galit Shmueli: Atomist or holist? A diagnosis and vision for more productive interdisciplinary AI ethics dialogue. Patterns 4(1): 100652 (2023)
- [c44] Ronny Luss, Amit Dhurandhar, Miao Liu: Local Explanations for Reinforcement Learning. AAAI 2023: 9002-9010
- [c43] Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan: When Neural Networks Fail to Generalize? A Model Sensitivity Perspective. AAAI 2023: 11219-11227
- [c42] Tim Draws, Karthikeyan Natesan Ramamurthy, Ioana Baldini, Amit Dhurandhar, Inkit Padhi, Benjamin Timmermans, Nava Tintarev: Explainable Cross-Topic Stance Detection for Search Results. CHIIR 2023: 221-235
- [c41] Brianna Richardson, Prasanna Sattigeri, Dennis Wei, Karthikeyan Natesan Ramamurthy, Kush R. Varshney, Amit Dhurandhar, Juan E. Gilbert: Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness. FAccT 2023: 736-752
- [c40] Igor Melnyk, Vijil Chenthamarakshan, Pin-Yu Chen, Payel Das, Amit Dhurandhar, Inkit Padhi, Devleena Das: Reprogramming Pretrained Language Models for Antibody Sequence Infilling. ICML 2023: 24398-24419
- [c39] Giridhar Ganapavarapu, Sumanta Mukherjee, Natalia Martinez Gil, Kanthi K. Sarpatwar, Amaresh Rajasekharan, Amit Dhurandhar, Vijay Arya, Roman Vaculín: AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases. KDD 2023: 5777-5778
- [c38] Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan: Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation. MICCAI (1) 2023: 728-738
- [c37] Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Kartik Ahuja, Vijay Arya: Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning. NeurIPS 2023
- [i41] Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan: Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation. CoRR abs/2309.01207 (2023)
- 2022
- [j14] Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett: Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. Proc. ACM Hum. Comput. Interact. 6(CSCW1): 83:1-83:22 (2022)
- [j13] Sanjoy Dey, Prithwish Chakraborty, Bum Chul Kwon, Amit Dhurandhar, Mohamed F. Ghalwash, Fernando J. Suarez Saiz, Kenney Ng, Daby Sow, Kush R. Varshney, Pablo Meyer: Human-centered explainability for life sciences, healthcare, and medical informatics. Patterns 3(5): 100493 (2022)
- [c36] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: Impact and Design. AAAI 2022: 12651-12657
- [c35] Saneem A. Chemmengath, Amar Prakash Azad, Ronny Luss, Amit Dhurandhar: Let the CAT out of the bag: Contrastive Attributed explanations for Text. EMNLP 2022: 7190-7206
- [c34] Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar: Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI. HCOMP 2022: 147-159
- [c33] Amit Dhurandhar, Tejaswini Pedapati: Multihop: Leveraging Complex Models to Learn Accurate Simple Models. ICKG 2022: 48-55
- [c32] Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar: Auto-Transfer: Learning to Route Transferable Representations. ICLR 2022
- [c31] Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Karthikeyan Shanmugam: Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations. NeurIPS 2022
- [c30] Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth Daly, Moninder Singh: On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach. NeurIPS 2022
- [i40] Amit Dhurandhar, Karthikeyan Ramamurthy, Kartik Ahuja, Vijay Arya: Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning. CoRR abs/2201.12143 (2022)
- [i39] Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar: Auto-Transfer: Learning to Route Transferrable Representations. CoRR abs/2202.01011 (2022)
- [i38] Karthikeyan Natesan Ramamurthy, Amit Dhurandhar, Dennis Wei, Zaid Bin Tariq: Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners. CoRR abs/2202.01153 (2022)
- [i37] Ronny Luss, Amit Dhurandhar, Miao Liu: Local Explanations for Reinforcement Learning. CoRR abs/2202.03597 (2022)
- [i36] Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar: Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI. CoRR abs/2206.10847 (2022)
- [i35] Travis Greene, Amit Dhurandhar, Galit Shmueli: Atomist or Holist? A Diagnosis and Vision for More Productive Interdisciplinary AI Ethics Dialogue. CoRR abs/2208.09174 (2022)
- [i34] Tsuyoshi Idé, Amit Dhurandhar, Jirí Navrátil, Moninder Singh, Naoki Abe: Anomaly Attribution with Likelihood Compensation. CoRR abs/2208.10679 (2022)
- [i33] Shreyas Fadnavis, Amit Dhurandhar, Raquel Norel, Jenna M. Reinen, Carla Agurto, Erica Secchettin, Vittorio Schweiger, Giovanni Perini, Guillermo A. Cecchi: PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization. CoRR abs/2209.09814 (2022)
- [i32] Igor Melnyk, Vijil Chenthamarakshan, Pin-Yu Chen, Payel Das, Amit Dhurandhar, Inkit Padhi, Devleena Das: Reprogramming Large Pretrained Language Models for Antibody Sequence Infilling. CoRR abs/2210.07144 (2022)
- [i31] Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh: On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach. CoRR abs/2211.01498 (2022)
- [i30] Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan: When Neural Networks Fail to Generalize? A Model Sensitivity Perspective. CoRR abs/2212.00850 (2022)
- 2021
- [c29] Tsuyoshi Idé, Amit Dhurandhar, Jirí Navrátil, Moninder Singh, Naoki Abe: Anomaly Attribution with Likelihood Compensation. AAAI 2021: 4131-4138
- [c28] Kartik Ahuja, Karthikeyan Shanmugam, Amit Dhurandhar: Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions. AISTATS 2021: 1270-1278
- [c27] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360 Toolkit. COMAD/CODS 2021: 376-379
- [c26] Abhin Shah, Kartik Ahuja, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney, Amit Dhurandhar: Treatment Effect Estimation Using Invariant Risk Minimization. ICASSP 2021: 5005-5009
- [c25] Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, Kush R. Varshney: Empirical or Invariant Risk Minimization? A Sample Complexity Perspective. ICLR 2021
- [c24] Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu: Leveraging Latent Features for Local Explanations. KDD 2021: 1139-1149
- [c23] Isha Puri, Amit Dhurandhar, Tejaswini Pedapati, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney: CoFrNets: Interpretable Neural Architecture Inspired by Continued Fractions. NeurIPS 2021: 21668-21680
- [i29] Abhin Shah, Kartik Ahuja, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney, Amit Dhurandhar: Treatment Effect Estimation using Invariant Risk Minimization. CoRR abs/2103.07788 (2021)
- [i28] Ronny Luss, Amit Dhurandhar: Towards Better Model Understanding with Path-Sufficient Explanations. CoRR abs/2109.06181 (2021)
- [i27] Amit Dhurandhar, Tejaswini Pedapati: Building Accurate Simple Models with Multihop. CoRR abs/2109.06961 (2021)
- [i26] Saneem A. Chemmengath, Amar Prakash Azad, Ronny Luss, Amit Dhurandhar: Let the CAT out of the bag: Contrastive Attributed explanations for Text. CoRR abs/2109.07983 (2021)
- [i25] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: Impact and Design. CoRR abs/2109.12151 (2021)
- 2020
- [j12] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models. J. Mach. Learn. Res. 21: 130:1-130:6 (2020)
- [c22] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI explainability 360: hands-on tutorial. FAT* 2020: 696
- [c21] Amit Dhurandhar, Karthik S. Gurumoorthy: Classifier Invariant Approach to Learn from Positive-Unlabeled Data. ICDM 2020: 102-111
- [c20] Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar: Invariant Risk Minimization Games. ICML 2020: 145-155
- [c19] Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss: Enhancing Simple Models by Exploiting What They Already Know. ICML 2020: 2525-2534
- [c18] Prithwish Chakraborty, Bum Chul Kwon, Sanjoy Dey, Amit Dhurandhar, Daniel M. Gruen, Kenney Ng, Daby Sow, Kush R. Varshney: Tutorial on Human-Centered Explainability for Healthcare. KDD 2020: 3547-3548
- [c17] Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar: Learning Global Transparent Models consistent with Local Contrastive Explanations. NeurIPS 2020
- [c16] Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar: Model Agnostic Multilevel Explanations. NeurIPS 2020
- [i24] Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar: Invariant Risk Minimization Games. CoRR abs/2002.04692 (2020)
- [i23] Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar: Learning Global Transparent Models from Local Contrastive Explanations. CoRR abs/2002.08247 (2020)
- [i22] Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar: Model Agnostic Multilevel Explanations. CoRR abs/2003.06005 (2020)
- [i21] Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett: Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. CoRR abs/2010.07938 (2020)
- [i20] Kartik Ahuja, Karthikeyan Shanmugam, Amit Dhurandhar: Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions. CoRR abs/2010.15234 (2020)
- [i19] Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, Kush R. Varshney: Empirical or Invariant Risk Minimization? A Sample Complexity Perspective. CoRR abs/2010.16412 (2020)
- [i18] Kartik Ahuja, Amit Dhurandhar, Kush R. Varshney: Learning to Initialize Gradient Descent Using Gradient Descent. CoRR abs/2012.12141 (2020)
2010 – 2019
- 2019
- [c15] Michael Hind, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilovic, Karthikeyan Natesan Ramamurthy, Kush R. Varshney: TED: Teaching AI to Explain its Decisions. AIES 2019: 123-129
- [c14] Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo A. Cecchi, Charu C. Aggarwal: Efficient Data Representation by Selecting Prototypes with Importance Weights. ICDM 2019: 260-269
- [i17] Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Karthikeyan Shanmugam, Chun-Chen Tu: Generating Contrastive Explanations with Monotonic Attribute Functions. CoRR abs/1905.12698 (2019)
- [i16] Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss: Leveraging Simple Model Predictions for Enhancing its Performance. CoRR abs/1905.13565 (2019)
- [i15] Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchir Puri: Model Agnostic Contrastive Explanations for Structured Data. CoRR abs/1906.00117 (2019)
- [i14] Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic: Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning. CoRR abs/1906.02299 (2019)
- [i13] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. CoRR abs/1909.03012 (2019)
- 2018
- [c13] Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. NeurIPS 2018: 590-601
- [c12] Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder A. Olsen: Improving Simple Models with Confidence Profiles. NeurIPS 2018: 10317-10327
- [i12] Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. CoRR abs/1802.07623 (2018)
- [i11] Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic: Teaching Meaningful Explanations. CoRR abs/1805.11648 (2018)
- [i10] Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder A. Olsen: Improving Simple Models with Confidence Profiles. CoRR abs/1807.07506 (2018)
- [i9] Karthik S. Gurumoorthy, Amit Dhurandhar: Streaming Methods for Restricted Strongly Convex Functions with Applications to Prototype Selection. CoRR abs/1807.08091 (2018)
- [i8] Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic: TED: Teaching AI to Explain its Decisions. CoRR abs/1811.04896 (2018)
- 2017
- [j11] Kien Pham, Prasanna Sattigeri, Amit Dhurandhar, A. C. Jacob, M. Vukovic, P. Chataigner, Juliana Freire, Aleksandra Mojsilovic, Kush R. Varshney: Real-time understanding of humanitarian crises via targeted information retrieval. IBM J. Res. Dev. 61(6): 7:1-7:12 (2017)
- [j10] Tsuyoshi Idé, Amit Dhurandhar: Supervised item response models for informative prediction. Knowl. Inf. Syst. 51(1): 235-257 (2017)
- [c11] Amit Dhurandhar, Margareta Ackerman, Xiang Wang: Uncovering Group Level Insights with Accordant Clustering. SDM 2017: 228-236
- [i7] Amit Dhurandhar, Margareta Ackerman, Xiang Wang: Uncovering Group Level Insights with Accordant Clustering. CoRR abs/1704.02378 (2017)
- [i6] Amit Dhurandhar, Steve Hanneke, Liu Yang: Learning with Changing Features. CoRR abs/1705.00219 (2017)
- [i5] Amit Dhurandhar, Vijay S. Iyengar, Ronny Luss, Karthikeyan Shanmugam: TIP: Typifying the Interpretability of Procedures. CoRR abs/1706.02952 (2017)
- [i4] Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo A. Cecchi: ProtoDash: Fast Interpretable Prototype Selection. CoRR abs/1707.01212 (2017)
- [i3] Amit Dhurandhar, Vijay S. Iyengar, Ronny Luss, Karthikeyan Shanmugam: A Formal Framework to Characterize Interpretability of Procedures. CoRR abs/1707.03886 (2017)
- 2016
- [j9] Sholom M. Weiss, Amit Dhurandhar, Robert J. Baseman, Brian F. White, Ronald Logan, Jonathan K. Winslow, Daniel J. Poindexter: Continuous prediction of manufacturing performance throughout the production lifecycle. J. Intell. Manuf. 27(4): 751-763 (2016)
- [i2] Amit Dhurandhar, Sechan Oh, Marek Petrik: Building an Interpretable Recommender via Loss-Preserving Transformation. CoRR abs/1606.05819 (2016)
- 2015
- [j8] Amit Dhurandhar: Bounds on the moments for an ensemble of random decision trees. Knowl. Inf. Syst. 44(2): 279-298 (2015)
- [j7] Amit Dhurandhar, Karthik Sankaranarayanan: Improving classification performance through selective instance completion. Mach. Learn. 100(2-3): 425-447 (2015)
- [c10] Amit Dhurandhar, Rajesh Kumar Ravi, Bruce Graves, Gopikrishnan Maniachari, Markus Ettl: Robust System for Identifying Procurement Fraud. AAAI 2015: 3896-3903
- [c9] Tsuyoshi Idé, Amit Dhurandhar: Informative Prediction Based on Ordinal Questionnaire Data. ICDM 2015: 191-200
- [c8] Amit Dhurandhar, Bruce Graves, Rajesh Kumar Ravi, Gopikrishnan Maniachari, Markus Ettl: Big Data System for Analyzing Risky Procurement Entities. KDD 2015: 1741-1750
- 2014
- [j6] Amit Dhurandhar, Marek Petrik: Efficient and accurate methods for updating generalized linear models with multiple feature additions. J. Mach. Learn. Res. 15(1): 2607-2627 (2014)
- [i1] Amit Dhurandhar, Karthik S. Gurumoorthy: Symmetric Submodular Clustering with Actionable Constraint. CoRR abs/1409.6967 (2014)
- 2013
- [j5] Amit Dhurandhar: Using coarse information for real valued prediction. Data Min. Knowl. Discov. 27(2): 167-192 (2013)
- [j4] Amit Dhurandhar, Jun Wang: Single Network Relational Transductive Learning. J. Artif. Intell. Res. 48: 813-839 (2013)
- [j3] Amit Dhurandhar, Alin Dobra: Probabilistic characterization of nearest neighbor classifier. Int. J. Mach. Learn. Cybern. 4(4): 259-272 (2013)
- [c7] Karthik Sankaranarayanan, Amit Dhurandhar: Intelligently querying incomplete instances for improving classification performance. CIKM 2013: 2169-2178
- [c6] Sholom M. Weiss, Amit Dhurandhar, Robert J. Baseman: Improving quality control by early prediction of manufacturing outcomes. KDD 2013: 1258-1266
- 2012
- [j2] Amit Dhurandhar, Alin Dobra: Distribution-free bounds for relational classification. Knowl. Inf. Syst. 31(1): 55-78 (2012)
- 2011
- [c5] Pawan Chowdhary, Markus Ettl, Amit Dhurandhar, Soumyadip Ghosh, Gopikrishnan Maniachari, Bruce Graves, Bill Schaefer, Yu Tang: Managing Procurement Spend Using Advanced Compliance Analytics. ICEBE 2011: 139-144
- [c4] Amit Dhurandhar: Improving predictions using aggregate information. KDD 2011: 1118-1126
- 2010
- [c3] Amit Dhurandhar: Learning Maximum Lag for Grouped Graphical Granger Models. ICDM Workshops 2010: 217-224
- [c2] Amit Dhurandhar: Multi-step Time Series Prediction in Complex Instrumented Domains. ICDM Workshops 2010: 1312-1319
2000 – 2009
- 2009
- [j1] Amit Dhurandhar, Alin Dobra: Semi-analytical method for analyzing models and model selection measures based on moment analysis. ACM Trans. Knowl. Discov. Data 3(1): 2:1-2:51 (2009)
- 2005
- [c1] Amit Dhurandhar, Kartik Shankarnarayanan, Rakesh Jawale: Robust Pattern Recognition Scheme for Devanagari Script. CIS (1) 2005: 1021-1026