


Tri Dao
2020 – today
2024
- [c35] Tri Dao: FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. ICLR 2024
- [c34] Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao: Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. ICML 2024
- [c33] Tri Dao, Albert Gu: Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. ICML 2024
- [c32] Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov: Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling. ICML 2024
- [c31] James Liu, Guangxuan Xiao, Kai Li, Jason D. Lee, Song Han, Tri Dao, Tianle Cai: BitDelta: Your Fine-Tune May Only Be Worth One Bit. NeurIPS 2024
- [i40] Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao: Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. CoRR abs/2401.10774 (2024)
- [i39] James Liu, Guangxuan Xiao, Kai Li, Jason D. Lee, Song Han, Tri Dao, Tianle Cai: BitDelta: Your Fine-Tune May Only Be Worth One Bit. CoRR abs/2402.10193 (2024)
- [i38] Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian J. McAuley, Han Hu, Torsten Scholak, Sébastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, et al.: StarCoder 2 and The Stack v2: The Next Generation. CoRR abs/2402.19173 (2024)
- [i37] Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov: Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling. CoRR abs/2403.03234 (2024)
- [i36] Tri Dao, Albert Gu: Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. CoRR abs/2405.21060 (2024)
- [i35] Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro: An Empirical Study of Mamba-based Language Models. CoRR abs/2406.07887 (2024)
- [i34] Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, Tri Dao: FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision. CoRR abs/2407.08608 (2024)
- [i33] Sukjun Hwang, Aakash Lahoti, Tri Dao, Albert Gu: Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers. CoRR abs/2407.09941 (2024)
- [i32] Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao: The Mamba in the Llama: Distilling and Accelerating Hybrid Models. CoRR abs/2408.15237 (2024)
- [i31] Maurice Weber, Daniel Y. Fu, Quentin Anthony, Yonatan Oren, Shane Adams, Anton Alexandrov, Xiaozhong Lyu, Huu Nguyen, Xiaozhe Yao, Virginia Adams, Ben Athiwaratkun, Rahul Chalamala, Kezhen Chen, Max Ryabinin, Tri Dao, Percy Liang, Christopher Ré, Irina Rish, Ce Zhang: RedPajama: an Open Dataset for Training Large Language Models. CoRR abs/2411.12372 (2024)
- [i30] Rui Pan, Zhuang Wang, Zhen Jia, Can Karakus, Luca Zancato, Tri Dao, Ravi Netravali, Yida Wang: Marconi: Prefix Caching for the Era of Hybrid LLMs. CoRR abs/2411.19379 (2024)

2023
- [j1] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason T. Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries: StarCoder: may the source be with you! Trans. Mach. Learn. Res. 2023 (2023)
- [c30] Daniel Y. Fu, Tri Dao, Khaled Kamal Saab, Armin W. Thomas, Atri Rudra, Christopher Ré: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. ICLR 2023
- [c29] Michael Zhang, Khaled Kamal Saab, Michael Poli, Tri Dao, Karan Goel, Christopher Ré: Effectively Modeling Time Series with Simple Discrete State Spaces. ICLR 2023
- [c28] Daniel Y. Fu, Elliot L. Epstein, Eric Nguyen, Armin W. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré: Simple Hardware-Efficient Long Convolutions for Sequence Modeling. ICML 2023: 10373-10391
- [c27] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time. ICML 2023: 22137-22176
- [c26] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré: Hyena Hierarchy: Towards Larger Convolutional Language Models. ICML 2023: 28043-28078
- [i29] Daniel Y. Fu, Elliot L. Epstein, Eric Nguyen, Armin W. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré: Simple Hardware-Efficient Long Convolutions for Sequence Modeling. CoRR abs/2302.06646 (2023)
- [i28] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré: Hyena Hierarchy: Towards Larger Convolutional Language Models. CoRR abs/2302.10866 (2023)
- [i27] Michael Zhang, Khaled Kamal Saab, Michael Poli, Tri Dao, Karan Goel, Christopher Ré: Effectively Modeling Time Series with Simple Discrete State Spaces. CoRR abs/2303.09489 (2023)
- [i26] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries: StarCoder: may the source be with you! CoRR abs/2305.06161 (2023)
- [i25] Tri Dao: FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. CoRR abs/2307.08691 (2023)
- [i24] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time. CoRR abs/2310.17157 (2023)
- [i23] Albert Gu, Tri Dao: Mamba: Linear-Time Sequence Modeling with Selective State Spaces. CoRR abs/2312.00752 (2023)

2022
- [c25] Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré: Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models. ICLR 2022
- [c24] Tri Dao, Beidi Chen, Nimit Sharad Sohoni, Arjun D. Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré: Monarch: Expressive Structured Matrices for Efficient and Accurate Training. ICML 2022: 4690-4721
- [c23] Chenlin Meng, Linqi Zhou, Kristy Choi, Tri Dao, Stefano Ermon: ButterflyFlow: Building Invertible Layers with Butterfly Matrices. ICML 2022: 15360-15375
- [c22] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré: FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. NeurIPS 2022
- [c21] Eric Nguyen, Karan Goel, Albert Gu, Gordon W. Downs, Preey Shah, Tri Dao, Stephen Baccus, Christopher Ré: S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces. NeurIPS 2022
- [c20] Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon: Transform Once: Efficient Operator Learning in Frequency Domain. NeurIPS 2022
- [c19] Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang: Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees. NeurIPS 2022
- [c18] Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. NeurIPS 2022
- [i22] Tri Dao, Beidi Chen, Nimit Sharad Sohoni, Arjun D. Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré: Monarch: Expressive Structured Matrices for Efficient and Accurate Training. CoRR abs/2204.00595 (2022)
- [i21] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré: FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. CoRR abs/2205.14135 (2022)
- [i20] Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. CoRR abs/2206.01288 (2022)
- [i19] Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang: Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees. CoRR abs/2206.01299 (2022)
- [i18] Chenlin Meng, Linqi Zhou, Kristy Choi, Tri Dao, Stefano Ermon: ButterflyFlow: Building Invertible Layers with Butterfly Matrices. CoRR abs/2209.13774 (2022)
- [i17] Eric Nguyen, Karan Goel, Albert Gu, Gordon W. Downs, Preey Shah, Tri Dao, Stephen A. Baccus, Christopher Ré: S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces. CoRR abs/2210.06583 (2022)
- [i16] Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon: Transform Once: Efficient Operator Learning in Frequency Domain. CoRR abs/2211.14453 (2022)
- [i15] Tri Dao, Daniel Y. Fu, Khaled Kamal Saab, Armin W. Thomas, Atri Rudra, Christopher Ré: Hungry Hungry Hippos: Towards Language Modeling with State Space Models. CoRR abs/2212.14052 (2022)

2021
- [c17] Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan Lingjie Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Ré: MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training. ICLR 2021
- [c16] Tri Dao, Govinda M. Kamath, Vasilis Syrgkanis, Lester Mackey: Knowledge Distillation as Semiparametric Inference. ICLR 2021
- [c15] Jared Quincy Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Ré, Chelsea Finn, Percy Liang: Catformer: Designing Stable Transformers via Sensitivity Analysis. ICML 2021: 2489-2499
- [c14] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré: Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers. NeurIPS 2021: 572-585
- [c13] Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar: Rethinking Neural Operations for Diverse Tasks. NeurIPS 2021: 15855-15869
- [c12] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré: Scatterbrain: Unifying Sparse and Low-rank Attention. NeurIPS 2021: 17413-17426
- [i14] Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar: Rethinking Neural Operations for Diverse Tasks. CoRR abs/2103.15798 (2021)
- [i13] Tri Dao, Govinda M. Kamath, Vasilis Syrgkanis, Lester Mackey: Knowledge Distillation as Semiparametric Inference. CoRR abs/2104.09732 (2021)
- [i12] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré: Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers. CoRR abs/2110.13985 (2021)
- [i11] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré: Scatterbrain: Unifying Sparse and Low-rank Attention Approximation. CoRR abs/2110.15343 (2021)
- [i10] Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré: Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models. CoRR abs/2112.00029 (2021)

2020
- [c11] Tri Dao, Nimit Sharad Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, Christopher Ré: Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps. ICLR 2020
- [c10] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Ré: HiPPO: Recurrent Memory with Optimal Polynomial Projections. NeurIPS 2020
- [i9] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Ré: HiPPO: Recurrent Memory with Optimal Polynomial Projections. CoRR abs/2008.07669 (2020)
- [i8] Tri Dao, Nimit Sharad Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, Christopher Ré: Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps. CoRR abs/2012.14966 (2020)
2010 – 2019
2019
- [c9] Jian Zhang, Avner May, Tri Dao, Christopher Ré: Low-Precision Random Fourier Features for Memory-constrained Kernel Approximation. AISTATS 2019: 1264-1274
- [c8] Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré: Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. ICML 2019: 1517-1527
- [c7] Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, Christopher Ré: A Kernel Theory of Modern Data Augmentation. ICML 2019: 1528-1537
- [c6] Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, Stefano Ermon: Approximating the Permanent by Sampling from Adaptive Partitions. NeurIPS 2019: 8858-8869
- [c5] Avner May, Jian Zhang, Tri Dao, Christopher Ré: On the Downstream Performance of Compressed Word Embeddings. NeurIPS 2019: 11782-11793
- [c4] Jonathan Kuck, Tri Dao, Shenjia Zhao, Burak Bartan, Ashish Sabharwal, Stefano Ermon: Adaptive Hashing for Model Counting. UAI 2019: 271-280
- [i7] Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré: Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. CoRR abs/1903.05895 (2019)
- [i6] Avner May, Jian Zhang, Tri Dao, Christopher Ré: On the Downstream Performance of Compressed Word Embeddings. CoRR abs/1909.01264 (2019)
- [i5] Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, Stefano Ermon: Approximating the Permanent by Sampling from Adaptive Partitions. CoRR abs/1911.11856 (2019)

2018
- [c3] Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré: Learning Invariance with Compact Transforms. ICLR (Workshop) 2018
- [c2] Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré: Learning Compressed Transforms with Low Displacement Rank. NeurIPS 2018: 9066-9078
- [i4] Tri Dao, Albert Gu, Alexander J. Ratner, Virginia Smith, Christopher De Sa, Christopher Ré: A Kernel Theory of Modern Data Augmentation. CoRR abs/1803.06084 (2018)
- [i3] Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré: Learning Compressed Transforms with Low Displacement Rank. CoRR abs/1810.02309 (2018)
- [i2] Jian Zhang, Avner May, Tri Dao, Christopher Ré: Low-Precision Random Fourier Features for Memory-Constrained Kernel Approximation. CoRR abs/1811.00155 (2018)

2017
- [c1] Tri Dao, Christopher De Sa, Christopher Ré: Gaussian Quadrature for Kernel Features. NIPS 2017: 6107-6117
- [i1] Tri Dao, Christopher De Sa, Christopher Ré: Gaussian Quadrature for Kernel Features. CoRR abs/1709.02605 (2017)
last updated on 2025-02-07 23:51 CET by the dblp team
all metadata released as open data under CC0 1.0 license