


Neel Nanda
- 2025
[j4]Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeffrey Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Isaac Bloom, Stella Biderman, Adrià Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, William Saunders, Eric J. Michaud, Stephen Casper, Max Tegmark, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, Tom McGrath:
Open Problems in Mechanistic Interpretability. Trans. Mach. Learn. Res. 2025 (2025)
[c19]Javier Ferrando, Oscar Balcells Obeso, Senthooran Rajamanoharan, Neel Nanda:
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models. ICLR 2025
[c18]Patrick Leask, Bart Bussmann, Michael T. Pearce, Joseph Isaac Bloom, Curt Tigges, Noura Al Moubayed, Lee Sharkey, Neel Nanda:
Sparse Autoencoders Do Not Find Canonical Units of Analysis. ICLR 2025
[c17]Aleksandar Makelov, Georg Lange, Neel Nanda:
Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control. ICLR 2025
[c16]Bart Bussmann, Noa Nabeshima, Adam Karvonen, Neel Nanda:
Learning Multi-Level Features with Matryoshka Sparse Autoencoders. ICML 2025
[c15]Subhash Kantamneni, Joshua Engels, Senthooran Rajamanoharan, Max Tegmark, Neel Nanda:
Are Sparse Autoencoders Useful? A Case Study in Sparse Probing. ICML 2025
[c14]Adam Karvonen, Can Rager, Johnny Lin, Curt Tigges, Joseph Isaac Bloom, David Chanin, Yeu-Tong Lau, Eoin Farrell, Callum McDougall, Kola Ayonrinde, Demian Till, Matthew Wearden, Arthur Conmy, Samuel Marks, Neel Nanda:
SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability. ICML 2025
[c13]Dmitrii Kharlapenko, Stepan Shabalin, Arthur Conmy, Neel Nanda:
Scaling Sparse Feature Circuits For Studying In-Context Learning. ICML 2025
[c12]Patrick Leask, Neel Nanda, Noura Al Moubayed:
Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models. ICML 2025
[i65]Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Isaac Bloom, Stella Biderman, Adrià Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, Eric J. Michaud, Stephen Casper, Max Tegmark, William Saunders, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, Tom McGrath:
Open Problems in Mechanistic Interpretability. CoRR abs/2501.16496 (2025)
[i64]Patrick Leask, Bart Bussmann, Michael T. Pearce, Joseph Isaac Bloom, Curt Tigges, Noura Al Moubayed, Lee Sharkey, Neel Nanda:
Sparse Autoencoders Do Not Find Canonical Units of Analysis. CoRR abs/2502.04878 (2025)
[i63]Subhash Kantamneni, Joshua Engels, Senthooran Rajamanoharan, Max Tegmark, Neel Nanda:
Are Sparse Autoencoders Useful? A Case Study in Sparse Probing. CoRR abs/2502.16681 (2025)
[i62]Iván Arcuschin, Jett Janiak, Robert Krzyzanowski, Senthooran Rajamanoharan, Neel Nanda, Arthur Conmy:
Chain-of-Thought Reasoning In The Wild Is Not Always Faithful. CoRR abs/2503.08679 (2025)
[i61]Adam Karvonen, Can Rager, Johnny Lin, Curt Tigges, Joseph Isaac Bloom, David Chanin, Yeu-Tong Lau, Eoin Farrell, Callum McDougall, Kola Ayonrinde, Matthew Wearden, Arthur Conmy, Samuel Marks, Neel Nanda:
SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability. CoRR abs/2503.09532 (2025)
[i60]Bart Bussmann, Noa Nabeshima, Adam Karvonen, Neel Nanda:
Learning Multi-Level Features with Matryoshka Sparse Autoencoders. CoRR abs/2503.17547 (2025)
[i59]Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah D. Goodman, Allan Dafoe, Four Flynn, Anca D. Dragan:
An Approach to Technical AGI Safety and Security. CoRR abs/2504.01849 (2025)
[i58]Julian Minder, Clément Dumas, Caden Juang, Bilal Chughtai, Neel Nanda:
Robustly identifying concepts introduced during chat fine-tuning using crosscoders. CoRR abs/2504.02922 (2025)
[i57]Dmitrii Kharlapenko, Stepan Shabalin, Fazl Barez, Arthur Conmy, Neel Nanda:
Scaling sparse feature circuit finding for in-context learning. CoRR abs/2504.13756 (2025)
[i56]Bartosz Cywinski, Emil Ryd, Senthooran Rajamanoharan, Neel Nanda:
Towards eliciting latent knowledge from LLMs with mechanistic interpretability. CoRR abs/2505.14352 (2025)
[i55]Patrick Leask, Neel Nanda, Noura Al Moubayed:
Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models. CoRR abs/2505.17769 (2025)
[i54]Edward Turner, Anna Soligo, Mia Taylor, Senthooran Rajamanoharan, Neel Nanda:
Model Organisms for Emergent Misalignment. CoRR abs/2506.11613 (2025)
[i53]Anna Soligo, Edward Turner, Senthooran Rajamanoharan, Neel Nanda:
Convergent Linear Representations of Emergent Misalignment. CoRR abs/2506.11618 (2025)
[i52]Constantin Venhoff, Ashkan Khakzar, Sonia Joseph, Philip Torr, Neel Nanda:
How Visual Representations Map to Language Feature Space in Multimodal LLMs. CoRR abs/2506.11976 (2025)
[i51]Been Kim, John Hewitt, Neel Nanda, Noah Fiedel, Oyvind Tafjord:
Because we have LLMs, we Can and Should Pursue Agentic Interpretability. CoRR abs/2506.12152 (2025)
[i50]Constantin Venhoff, Iván Arcuschin, Philip Torr, Arthur Conmy, Neel Nanda:
Understanding Reasoning in Thinking Language Models via Steering Vectors. CoRR abs/2506.18167 (2025)
[i49]Paul C. Bogdan, Uzay Macar, Neel Nanda, Arthur Conmy:
Thought Anchors: Which LLM Reasoning Steps Matter? CoRR abs/2506.19143 (2025)
[i48]Atticus Wang, Joshua Engels, Oliver Clive-Griffin, Senthooran Rajamanoharan, Neel Nanda:
Simple Mechanistic Explanations for Out-Of-Context Reasoning. CoRR abs/2507.08218 (2025)
[i47]Tomek Korbak, Mikita Balesni, Elizabeth Barnes, Yoshua Bengio, Joe Benton, Joseph Bloom, Mark Chen, Alan Cooney, Allan Dafoe, Anca D. Dragan, Scott Emmons, Owain Evans, David Farhi, Ryan Greenblatt, Dan Hendrycks, Marius Hobbhahn, Evan Hubinger, Geoffrey Irving, Erik Jenner, Daniel Kokotajlo, Victoria Krakovna, Shane Legg, David Lindner, David Luan, Aleksander Madry, Julian Michael, Neel Nanda, Dave Orr, Jakub Pachocki, Ethan Perez, Mary Phuong, Fabien Roger, Joshua Saxe, Buck Shlegeris, Martín Soto, Eric Steinberger, Jasmine Wang, Wojciech Zaremba, Bowen Baker, Rohin Shah, Vladimir Mikulik:
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety. CoRR abs/2507.11473 (2025)
[i46]Jake Ward, Chuqiao Lin, Constantin Venhoff, Neel Nanda:
Reasoning-Finetuning Repurposes Latent Representations in Base Models. CoRR abs/2507.12638 (2025)
[i45]Helena Casademunt, Caden Juang, Adam Karvonen, Samuel Marks, Senthooran Rajamanoharan, Neel Nanda:
Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning. CoRR abs/2507.16795 (2025)
[i44]Farnoush Rezaei Jafari, Oliver Eberle, Ashkan Khakzar, Neel Nanda:
RelP: Faithful and Efficient Circuit Discovery via Relevance Patching. CoRR abs/2508.21258 (2025)
[i43]Oscar Obeso, Andy Arditi, Javier Ferrando, Joshua Freeman, Cameron Holmes, Neel Nanda:
Real-Time Detection of Hallucinated Entities in Long-Form Generation. CoRR abs/2509.03531 (2025)
[i42]Bartosz Cywinski, Emil Ryd, Rowan Wang, Senthooran Rajamanoharan, Neel Nanda, Arthur Conmy, Samuel Marks:
Eliciting Secret Knowledge from Language Models. CoRR abs/2510.01070 (2025)
[i41]Dmitrii Troitskii, Koyena Pal, Chris Wendler, Callum McDougall, Neel Nanda:
Internal states before wait modulate reasoning patterns. CoRR abs/2510.04128 (2025)
[i40]Constantin Venhoff, Iván Arcuschin, Philip Torr, Arthur Conmy, Neel Nanda:
Base Models Know How to Reason, Thinking Models Learn When. CoRR abs/2510.07364 (2025)
[i39]Julian Minder, Clément Dumas, Stewart Slocum, Helena Casademunt, Cameron Holmes, Robert West, Neel Nanda:
Narrow Finetuning Leaves Clearly Readable Traces in Activation Differences. CoRR abs/2510.13900 (2025)
[i38]Tim Tian Hua, Andrew Qin, Samuel Marks, Neel Nanda:
Steering Evaluation-Aware Language Models to Act Like They Are Deployed. CoRR abs/2510.20487 (2025)
[i37]Uzay Macar, Paul C. Bogdan, Senthooran Rajamanoharan, Neel Nanda:
Thought Branches: Interpreting LLM Reasoning Requires Resampling. CoRR abs/2510.27484 (2025)
[i36]Lewis Smith, Bilal Chughtai, Neel Nanda:
Difficulties with Evaluating a Deception Detector for AIs. CoRR abs/2511.22662 (2025)
[i35]Constantin Venhoff, Ashkan Khakzar, Sonia Joseph, Philip Torr, Neel Nanda:
Too Late to Recall: Explaining the Two-Hop Problem in Multimodal Knowledge Retrieval. CoRR abs/2512.03276 (2025)
[i34]Nick Jiang, Xiaoqing Sun, Lisa Dunlap, Lewis Smith, Neel Nanda:
Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit. CoRR abs/2512.10092 (2025)
- 2024
[j3]Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, Dimitris Bertsimas:
Universal Neurons in GPT2 Language Models. Trans. Mach. Learn. Res. 2024 (2024)
[c11]Aleksandar Makelov, Georg Lange, Atticus Geiger, Neel Nanda:
Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching. ICLR 2024
[c10]Fred Zhang, Neel Nanda:
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods. ICLR 2024
[c9]Cody Rushing, Neel Nanda:
Explorations of Self-Repair in Language Models. ICML 2024
[c8]Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, Neel Nanda:
Refusal in Language Models Is Mediated by a Single Direction. NeurIPS 2024
[c7]Jacob Dunefsky, Philippe Chlenski, Neel Nanda:
Transcoders find interpretable LLM feature circuits. NeurIPS 2024
[c6]Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, Neel Nanda:
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders. NeurIPS 2024
[c5]Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, Neel Nanda:
Confidence Regulation Neurons in Language Models. NeurIPS 2024
[i33]Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, Dimitris Bertsimas:
Universal Neurons in GPT2 Language Models. CoRR abs/2401.12181 (2024)
[i32]Bilal Chughtai, Alan Cooney, Neel Nanda:
Summing Up the Facts: Additive Mechanisms Behind Factual Recall in LLMs. CoRR abs/2402.07321 (2024)
[i31]Cody Rushing, Neel Nanda:
Explorations of Self-Repair in Language Models. CoRR abs/2402.15390 (2024)
[i30]János Kramár, Tom Lieberum, Rohin Shah, Neel Nanda:
AtP*: An efficient and scalable method for localizing LLM behaviour to components. CoRR abs/2403.00745 (2024)
[i29]Stefan Heimersheim, Neel Nanda:
How to use and interpret activation patching. CoRR abs/2404.15255 (2024)
[i28]Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, Neel Nanda:
Improving Dictionary Learning with Gated Sparse Autoencoders. CoRR abs/2404.16014 (2024)
[i27]Aleksandar Makelov, Georg Lange, Neel Nanda:
Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control. CoRR abs/2405.08366 (2024)
[i26]Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Rimsky, Wes Gurnee, Neel Nanda:
Refusal in Language Models Is Mediated by a Single Direction. CoRR abs/2406.11717 (2024)
[i25]Jacob Dunefsky, Philippe Chlenski, Neel Nanda:
Transcoders Find Interpretable LLM Feature Circuits. CoRR abs/2406.11944 (2024)
[i24]Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, Neel Nanda:
Confidence Regulation Neurons in Language Models. CoRR abs/2406.16254 (2024)
[i23]Connor Kissane, Robert Krzyzanowski, Joseph Isaac Bloom, Arthur Conmy, Neel Nanda:
Interpreting Attention Layer Outputs with Sparse Autoencoders. CoRR abs/2406.17759 (2024)
[i22]Senthooran Rajamanoharan, Tom Lieberum, Nicolas Sonnerat, Arthur Conmy, Vikrant Varma, János Kramár, Neel Nanda:
Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders. CoRR abs/2407.14435 (2024)
[i21]Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca D. Dragan, Rohin Shah, Neel Nanda:
Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2. CoRR abs/2408.05147 (2024)
[i20]Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan, Neel Nanda:
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models. CoRR abs/2411.14257 (2024)
[i19]Adam Karvonen, Can Rager, Samuel Marks, Neel Nanda:
Evaluating Sparse Autoencoders on Targeted Concept Erasure Tasks. CoRR abs/2411.18895 (2024)
[i18]Bart Bussmann, Patrick Leask, Neel Nanda:
BatchTopK Sparse Autoencoders. CoRR abs/2412.06410 (2024)
- 2023
[j2]Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, Dimitris Bertsimas:
Finding Neurons in a Haystack: Case Studies with Sparse Probing. Trans. Mach. Learn. Res. 2023 (2023)
[c4]Neel Nanda, Andrew Lee, Martin Wattenberg:
Emergent Linear Representations in World Models of Self-Supervised Sequence Models. BlackboxNLP@EMNLP 2023: 16-30
[c3]Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt:
Progress measures for grokking via mechanistic interpretability. ICLR 2023
[c2]Bilal Chughtai, Lawrence Chan, Neel Nanda:
A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations. ICML 2023: 6243-6267
[i17]Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt:
Progress measures for grokking via mechanistic interpretability. CoRR abs/2301.05217 (2023)
[i16]Bilal Chughtai, Lawrence Chan, Neel Nanda:
A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations. CoRR abs/2302.03025 (2023)
[i15]Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Fazl Barez:
N2G: A Scalable Approach for Quantifying Interpretable Neuron Representations in Large Language Models. CoRR abs/2304.12918 (2023)
[i14]Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, Dimitris Bertsimas:
Finding Neurons in a Haystack: Case Studies with Sparse Probing. CoRR abs/2305.01610 (2023)
[i13]Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay B. Cohen, Fazl Barez:
Neuron to Graph: Interpreting Language Model Neurons at Scale. CoRR abs/2305.19911 (2023)
[i12]Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, Vladimir Mikulik:
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla. CoRR abs/2307.09458 (2023)
[i11]Neel Nanda, Andrew Lee, Martin Wattenberg:
Emergent Linear Representations in World Models of Self-Supervised Sequence Models. CoRR abs/2309.00941 (2023)
[i10]Fred Zhang, Neel Nanda:
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods. CoRR abs/2309.16042 (2023)
[i9]Callum McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, Neel Nanda:
Copy Suppression: Comprehensively Understanding an Attention Head. CoRR abs/2310.04625 (2023)
[i8]Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, Neel Nanda:
Linear Representations of Sentiment in Large Language Models. CoRR abs/2310.15154 (2023)
[i7]Lucia Quirke, Lovis Heindrich, Wes Gurnee, Neel Nanda:
Training Dynamics of Contextual N-Grams in Language Models. CoRR abs/2311.00863 (2023)
[i6]Aleksandar Makelov, Georg Lange, Neel Nanda:
Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching. CoRR abs/2311.17030 (2023)
- 2022
[j1]Michael K. Cohen, Marcus Hutter, Neel Nanda:
Fully General Online Imitation Learning. J. Mach. Learn. Res. 23: 334:1-334:30 (2022)
[c1]Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Scott Johnston, Andy Jones, Nicholas Joseph, Jackson Kernion, Shauna Kravec, Ben Mann, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Tom B. Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, Dario Amodei, Jack Clark:
Predictability and Surprise in Large Generative Models. FAccT 2022: 1747-1764
[i5]Deep Ganguli, Danny Hernandez, Liane Lovitt, Nova DasSarma, Tom Henighan, Andy Jones, Nicholas Joseph, Jackson Kernion, Benjamin Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Scott Johnston, Shauna Kravec, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Dario Amodei, Tom B. Brown, Jared Kaplan, Sam McCandlish, Chris Olah, Jack Clark:
Predictability and Surprise in Large Generative Models. CoRR abs/2202.07785 (2022)
[i4]Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, Jared Kaplan:
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. CoRR abs/2204.05862 (2022)
[i3]Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, Chris Olah:
In-context Learning and Induction Heads. CoRR abs/2209.11895 (2022)
- 2021
[i2]Michael K. Cohen, Marcus Hutter, Neel Nanda:
Fully General Online Imitation Learning. CoRR abs/2102.08686 (2021)
[i1]Neel Nanda, Jonathan Uesato, Sven Gowal:
An Empirical Investigation of Learning from Biased Toxicity Labels. CoRR abs/2110.01577 (2021)
