Nicholas Carlini
2020 – today
- 2022
- [c40] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini: Deduplicating Training Data Makes Language Models Better. ACL (1) 2022: 8424-8445
- [c39] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CCS 2022: 2779-2792
- [c38] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alexey Kurakin: AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. ICLR 2022
- [c37] Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini: Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. ICLR 2022
- [c36] Nicholas Carlini, Andreas Terzis: Poisoning and Backdooring Contrastive Learning. ICLR 2022
- [c35] Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr: Data Poisoning Won't Save You From Facial Recognition. ICLR 2022
- [c34] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. IEEE Symposium on Security and Privacy 2022: 1897-1914
- [i59] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang: Quantifying Memorization Across Neural Language Models. CoRR abs/2202.07646 (2022)
- [i58] Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini: Debugging Differential Privacy: A Case Study for Privacy Auditing. CoRR abs/2202.12219 (2022)
- [i57] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CoRR abs/2204.00032 (2022)
- [i56] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. CoRR abs/2206.10469 (2022)
- [i55] Nicholas Carlini, Florian Tramèr, Krishnamurthy Dvijotham, J. Zico Kolter: (Certified!!) Adversarial Robustness for Free! CoRR abs/2206.10550 (2022)
- [i54] Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini: Increasing Confidence in Adversarial Robustness Evaluations. CoRR abs/2206.13991 (2022)
- [i53] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. CoRR abs/2207.00099 (2022)
- [i52] Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David A. Wagner: Part-Based Models Improve Adversarial Robustness. CoRR abs/2209.09117 (2022)
- [i51] Nicholas Carlini, Vitaly Feldman, Milad Nasr: No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy". CoRR abs/2209.14987 (2022)
- [i50] Chawin Sitawarin, Florian Tramèr, Nicholas Carlini: Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. CoRR abs/2210.03297 (2022)
- [i49] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. CoRR abs/2210.17546 (2022)
- [i48] Florian Tramèr, Gautam Kamath, Nicholas Carlini: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. CoRR abs/2212.06470 (2022)
- [i47] Sanghyun Hong, Nicholas Carlini, Alexey Kurakin: Publishing Efficient On-device Models Increases Adversarial Vulnerability. CoRR abs/2212.13700 (2022)
- 2021
- [c33] Nicholas Carlini: Session details: Session 1: Adversarial Machine Learning. AISec@CCS 2021
- [c32] Nicholas Carlini: Session details: Session 2A: Machine Learning for Cybersecurity. AISec@CCS 2021
- [c31] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. ICML 2021: 1964-1974
- [c30] Nicholas Carlini: How Private is Machine Learning? IH&MMSec 2021: 3
- [c29] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta, Florian Tramèr: Is Private Learning Possible with Instance Encoding? IEEE Symposium on Security and Privacy 2021: 410-427
- [c28] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. IEEE Symposium on Security and Privacy 2021: 866-882
- [c27] Nicholas Carlini: Poisoning the Unlabeled Dataset of Semi-Supervised Learning. USENIX Security Symposium 2021: 1577-1592
- [c26] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium 2021: 2633-2650
- [e2] Nicholas Carlini, Ambra Demontis, Yizheng Chen: AISec@CCS 2021: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, Virtual Event, Republic of Korea, 15 November 2021. ACM 2021, ISBN 978-1-4503-8657-9 [contents]
- [i46] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. CoRR abs/2101.04535 (2021)
- [i45] Nicholas Carlini: Poisoning the Unlabeled Dataset of Semi-Supervised Learning. CoRR abs/2105.01622 (2021)
- [i44] Sanghyun Hong, Nicholas Carlini, Alexey Kurakin: Handcrafted Backdoors in Deep Neural Networks. CoRR abs/2106.04690 (2021)
- [i43] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin: AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation. CoRR abs/2106.04732 (2021)
- [i42] Nicholas Carlini, Andreas Terzis: Poisoning and Backdooring Contrastive Learning. CoRR abs/2106.09667 (2021)
- [i41] Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli: Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. CoRR abs/2106.09947 (2021)
- [i40] Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini: Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent. CoRR abs/2106.15023 (2021)
- [i39] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini: Deduplicating Training Data Makes Language Models Better. CoRR abs/2107.06499 (2021)
- [i38] Nicholas Carlini, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Florian Tramèr: NeuraCrypt is not private. CoRR abs/2108.07256 (2021)
- [i37] Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt: Unsolved Problems in ML Safety. CoRR abs/2109.13916 (2021)
- [i36] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. CoRR abs/2112.03570 (2021)
- [i35] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Counterfactual Memorization in Neural Language Models. CoRR abs/2112.12938 (2021)
- 2020
- [c25] Sadia Afroz, Nicholas Carlini, Ambra Demontis: AISec'20: 13th Workshop on Artificial Intelligence and Security. CCS 2020: 2143-2144
- [c24] Nicholas Carlini, Matthew Jagielski, Ilya Mironov: Cryptanalytic Extraction of Neural Network Models. CRYPTO (3) 2020: 189-218
- [c23] Nicholas Carlini, Hany Farid: Evading Deepfake-Image Detectors with White- and Black-Box Attacks. CVPR Workshops 2020: 2804-2813
- [c22] David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel: ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. ICLR 2020
- [c21] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML 2020: 9561-9571
- [c20] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin Raffel, Ekin Dogus Cubuk, Alexey Kurakin, Chun-Liang Li: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. NeurIPS 2020
- [c19] Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt: Measuring Robustness to Natural Distribution Shifts in Image Classification. NeurIPS 2020
- [c18] Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry: On Adaptive Attacks to Adversarial Example Defenses. NeurIPS 2020
- [c17] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium 2020: 1345-1362
- [i34] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. CoRR abs/2001.07685 (2020)
- [i33] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. CoRR abs/2002.04599 (2020)
- [i32] Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry: On Adaptive Attacks to Adversarial Example Defenses. CoRR abs/2002.08347 (2020)
- [i31] Nicholas Carlini, Matthew Jagielski, Ilya Mironov: Cryptanalytic Extraction of Neural Network Models. CoRR abs/2003.04884 (2020)
- [i30] Nicholas Carlini, Hany Farid: Evading Deepfake-Image Detectors with White- and Black-Box Attacks. CoRR abs/2004.00622 (2020)
- [i29] Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt: Measuring Robustness to Natural Distribution Shifts in Image Classification. CoRR abs/2007.00644 (2020)
- [i28] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. CoRR abs/2007.14321 (2020)
- [i27] Nicholas Carlini: A Partial Break of the Honeypots Defense to Catch Adversarial Attacks. CoRR abs/2009.10975 (2020)
- [i26] Guneet S. Dhillon, Nicholas Carlini: Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning. CoRR abs/2010.00071 (2020)
- [i25] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramèr: An Attack on InstaHide: Is Private Learning Possible with Instance Encoding? CoRR abs/2011.05315 (2020)
- [i24] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. CoRR abs/2012.07805 (2020)
2010 – 2019
- 2019
- [c16] Sadia Afroz, Battista Biggio, Nicholas Carlini, Yuval Elovici, Asaf Shabtai: AISec'19: 12th ACM Workshop on Artificial Intelligence and Security. CCS 2019: 2707-2708
- [c15] Justin Gilmer, Nicolas Ford, Nicholas Carlini, Ekin D. Cubuk: Adversarial Examples Are a Natural Consequence of Test Error in Noise. ICML 2019: 2280-2289
- [c14] Yao Qin, Nicholas Carlini, Garrison W. Cottrell, Ian J. Goodfellow, Colin Raffel: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. ICML 2019: 5231-5240
- [c13] David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel: MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS 2019: 5050-5060
- [c12] Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song: The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. USENIX Security Symposium 2019: 267-284
- [e1] Lorenzo Cavallaro, Johannes Kinder, Sadia Afroz, Battista Biggio, Nicholas Carlini, Yuval Elovici, Asaf Shabtai: Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, AISec@CCS 2019, London, UK, November 15, 2019. ACM 2019, ISBN 978-1-4503-6833-9 [contents]
- [i23] Nic Ford, Justin Gilmer, Nicholas Carlini, Ekin Dogus Cubuk: Adversarial Examples Are a Natural Consequence of Test Error in Noise. CoRR abs/1901.10513 (2019)
- [i22] Nicholas Carlini: Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? CoRR abs/1902.02322 (2019)
- [i21] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, Alexey Kurakin: On Evaluating Adversarial Robustness. CoRR abs/1902.06705 (2019)
- [i20] Yao Qin, Nicholas Carlini, Ian J. Goodfellow, Garrison W. Cottrell, Colin Raffel: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition. CoRR abs/1903.10346 (2019)
- [i19] Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot: Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. CoRR abs/1903.10484 (2019)
- [i18] Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros G. Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim M. Hazelwood, Furong Huang, Martin Jaggi, Kevin G. Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konecný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Gordon Murray, Dimitris S. Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan R. Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric P. Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar: SysML: The New Frontier of Machine Learning Systems. CoRR abs/1904.03257 (2019)
- [i17] David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel: MixMatch: A Holistic Approach to Semi-Supervised Learning. CoRR abs/1905.02249 (2019)
- [i16] Nicholas Carlini: A critique of the DeepSec Platform for Security Analysis of Deep Learning Models. CoRR abs/1905.07112 (2019)
- [i15] Steven Chen, Nicholas Carlini, David A. Wagner: Stateful Detection of Black-Box Adversarial Attacks. CoRR abs/1907.05587 (2019)
- [i14] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High-Fidelity Extraction of Neural Network Models. CoRR abs/1909.01838 (2019)
- [i13] Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot: Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications. CoRR abs/1910.13427 (2019)
- [i12] David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel: ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring. CoRR abs/1911.09785 (2019)
- 2018
- [b1] Nicholas Carlini: Evaluation and Design of Robust Neural Network Defenses. University of California, Berkeley, USA, 2018
- [c11] Anish Athalye, Nicholas Carlini, David A. Wagner: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018: 274-283
- [c10] Nicholas Carlini, David A. Wagner: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. IEEE Symposium on Security and Privacy Workshops 2018: 1-7
- [i11] Nicholas Carlini, David A. Wagner: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. CoRR abs/1801.01944 (2018)
- [i10] Anish Athalye, Nicholas Carlini, David A. Wagner: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. CoRR abs/1802.00420 (2018)
- [i9] Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, Dawn Song: The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets. CoRR abs/1802.08232 (2018)
- [i8] Anish Athalye, Nicholas Carlini: On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. CoRR abs/1804.03286 (2018)
- [i7] Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul F. Christiano, Ian J. Goodfellow: Unrestricted Adversarial Examples. CoRR abs/1809.08352 (2018)
- 2017
- [c9] Nicholas Carlini, David A. Wagner: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. AISec@CCS 2017: 3-14
- [c8] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57
- [c7] Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song: Adversarial Example Defense: Ensembles of Weak Defenses are not Strong. WOOT 2017
- [i6] Nicholas Carlini, David A. Wagner: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. CoRR abs/1705.07263 (2017)
- [i5] Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song: Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong. CoRR abs/1706.04701 (2017)
- [i4] Nicholas Carlini, Guy Katz, Clark W. Barrett, David L. Dill: Ground-Truth Adversarial Examples. CoRR abs/1709.10207 (2017)
- [i3] Nicholas Carlini, David A. Wagner: MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples. CoRR abs/1711.08478 (2017)
- 2016
- [c6] Nicholas Carlini, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David A. Wagner, Wenchao Zhou: Hidden Voice Commands. USENIX Security Symposium 2016: 513-530
- [i2] Nicholas Carlini, David A. Wagner: Defensive Distillation is Not Robust to Adversarial Examples. CoRR abs/1607.04311 (2016)
- [i1] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. CoRR abs/1608.04644 (2016)
- 2015
- [c5] Nicholas Carlini, Antonio Barresi, Mathias Payer, David A. Wagner, Thomas R. Gross: Control-Flow Bending: On the Effectiveness of Control-Flow Integrity. USENIX Security Symposium 2015: 161-176
- 2014
- [c4] Nicholas Carlini, David A. Wagner: ROP is Still Dangerous: Breaking Modern Defenses. USENIX Security Symposium 2014: 385-399
- 2013
- [c3] Eric Kim, Nicholas Carlini, Andrew Chang, George Yiu, Kai Wang, David A. Wagner: Improved Support for Machine-assisted Ballot-level Audits. EVT/WOTE 2013
- 2012
- [c2] Nicholas Carlini, Adrienne Porter Felt, David A. Wagner: An Evaluation of the Google Chrome Extension Security Architecture. USENIX Security Symposium 2012: 97-111
- [c1] Kai Wang, Nicholas Carlini, Eric Kim, Ivan Motyashov, Daniel Nguyen, David A. Wagner: Operator-Assisted Tabulation of Optical Scan Ballots. EVT/WOTE 2012
last updated on 2023-01-11 21:35 CET by the dblp team
all metadata released as open data under CC0 1.0 license