


Shixiang Gu
2020 – today
- 2023
- [j2]So Kuroki, Tatsuya Matsushima, Jumpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang:
Collective Intelligence for 2D Push Manipulations With Mobile Robots. IEEE Robotics Autom. Lett. 8(5): 2820-2827 (2023) - [i49]Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Shixiang Shane Gu:
Aligning Text-to-Image Models using Human Feedback. CoRR abs/2302.12192 (2023) - [i48]Satoshi Kataoka, Youngseog Chung, Seyed Kamyar Seyed Ghasemipour, Pannag Sanketi, Shixiang Shane Gu, Igor Mordatch:
Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning. CoRR abs/2303.14870 (2023) - [i47]Zihan Ding, Yuanpei Chen, Allen Z. Ren, Shixiang Shane Gu, Hao Dong, Chi Jin:
Learning a Universal Human Prior for Dexterous Manipulation from Human Preference. CoRR abs/2304.04602 (2023) - [i46]Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, Izzeddin Gur:
Multimodal Web Navigation with Instruction-Finetuned Foundation Models. CoRR abs/2305.11854 (2023) - 2022
- [j1]Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo:
World robot challenge 2020 - partner robot: a data-driven approach for room tidying with mobile manipulator. Adv. Robotics 36(17-18): 850-869 (2022) - [c38]Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu:
Generalized Decision Transformer for Offline Hindsight Information Matching. ICLR 2022 - [c37]Scott Fujimoto, David Meger, Doina Precup, Ofir Nachum, Shixiang Shane Gu:
Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error. ICML 2022: 6918-6943 - [c36]Seyed Kamyar Seyed Ghasemipour, Satoshi Kataoka, Byron David, Daniel Freeman, Shixiang Shane Gu, Igor Mordatch:
Blocks Assemble! Learning to Assemble with Large-Scale Structured Reinforcement Learning. ICML 2022: 7435-7469 - [c35]Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, Ofir Nachum:
Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters. NeurIPS 2022 - [c34]Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa:
Large Language Models are Zero-Shot Reasoners. NeurIPS 2022 - [i45]Machel Reid, Yutaro Yamada, Shixiang Shane Gu:
Can Wikipedia Help Offline Reinforcement Learning? CoRR abs/2201.12122 (2022) - [i44]Scott Fujimoto, David Meger, Doina Precup, Ofir Nachum, Shixiang Shane Gu:
Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error. CoRR abs/2201.12417 (2022) - [i43]Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Byron David, Shixiang Gu, Satoshi Kataoka, Igor Mordatch:
Blocks Assemble! Learning to Assemble with Large-Scale Structured Reinforcement Learning. CoRR abs/2203.13733 (2022) - [i42]Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa:
Large Language Models are Zero-Shot Reasoners. CoRR abs/2205.11916 (2022) - [i41]Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, Ofir Nachum:
Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters. CoRR abs/2205.13703 (2022) - [i40]Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo:
World Robot Challenge 2020 - Partner Robot: A Data-Driven Approach for Room Tidying with Mobile Manipulator. CoRR abs/2207.10106 (2022) - [i39]Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Yutaka Matsuo, Shixiang Shane Gu, Yoichi Ochiai:
Deep Billboards towards Lossless Real2Sim in Virtual Reality. CoRR abs/2208.08861 (2022) - [i38]Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai:
Mind's Eye: Grounded Language Model Reasoning through Simulation. CoRR abs/2210.05359 (2022) - [i37]Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei:
Scaling Instruction-Finetuned Language Models. CoRR abs/2210.11416 (2022) - [i36]Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han:
Large Language Models Can Self-Improve. CoRR abs/2210.11610 (2022) - [i35]Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu:
A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation. CoRR abs/2211.14296 (2022) - [i34]So Kuroki, Tatsuya Matsushima, Jumpei Arima, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang:
Collective Intelligence for Object Manipulation with Mobile Robots. CoRR abs/2211.15136 (2022) - 2021
- [c33]Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, Shixiang Gu:
Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization. ICLR 2021 - [c32]Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu:
Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning. ICML 2021: 1953-1963 - [c31]Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu:
Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning. ICML 2021: 3541-3552 - [c30]Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, Shixiang Shane Gu:
EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL. ICML 2021: 3682-3691 - [c29]Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu:
Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning. NeurIPS 2021: 9828-9842 - [c28]Scott Fujimoto, Shixiang Shane Gu:
A Minimalist Approach to Offline Reinforcement Learning. NeurIPS 2021: 20132-20145 - [i33]Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu:
Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning. CoRR abs/2103.12726 (2021) - [i32]Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu:
Identifying Co-Adaptation of Algorithmic and Implementational Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of Inference-based Algorithms. CoRR abs/2103.17258 (2021) - [i31]Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu:
Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning. CoRR abs/2106.01404 (2021) - [i30]Scott Fujimoto, Shixiang Shane Gu:
A Minimalist Approach to Offline Reinforcement Learning. CoRR abs/2106.06860 (2021) - [i29]Shixiang Shane Gu, Manfred Diaz, C. Daniel Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, Olivier Bachem:
Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Engineering beyond Reward Maximization. CoRR abs/2110.04686 (2021) - [i28]Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu:
Generalized Decision Transformer for Offline Hindsight Information Matching. CoRR abs/2111.10364 (2021) - [i27]Xin Zhang, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu:
Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization. CoRR abs/2111.12853 (2021) - [i26]Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi Ochiai, Shixiang Shane Gu:
VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field. CoRR abs/2111.13112 (2021) - [i25]Yuki Noguchi, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu:
Tool as Embodiment for Recursive Manipulation. CoRR abs/2112.00359 (2021) - 2020
- [c27]Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Àgata Lapedriza, Noah Jones, Shixiang Gu, Rosalind W. Picard:
Human-centric dialog training via offline reinforcement learning. EMNLP (1) 2020: 3985-4003 - [c26]Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman:
Dynamics-Aware Unsupervised Discovery of Skills. ICLR 2020 - [c25]Lisa Lee, Ben Eysenbach, Ruslan Salakhutdinov, Shixiang Shane Gu, Chelsea Finn:
Weakly-Supervised Reinforcement Learning for Controllable Behavior. NeurIPS 2020 - [c24]Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu:
Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning. Robotics: Science and Systems 2020 - [i24]Lisa Lee
, Benjamin Eysenbach, Ruslan Salakhutdinov, Shixiang Gu, Chelsea Finn:
Weakly-Supervised Reinforcement Learning for Controllable Behavior. CoRR abs/2004.02860 (2020) - [i23]Archit Sharma, Michael Ahn, Sergey Levine, Vikash Kumar, Karol Hausman, Shixiang Gu:
Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning. CoRR abs/2004.12974 (2020) - [i22]Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, Shixiang Gu:
Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization. CoRR abs/2006.03647 (2020) - [i21]Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, Shixiang Shane Gu:
EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL. CoRR abs/2007.11091 (2020) - [i20]Natasha Jaques
, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson
, Àgata Lapedriza, Noah Jones, Shixiang Shane Gu, Rosalind W. Picard:
Human-centric Dialog Training via Offline Reinforcement Learning. CoRR abs/2010.05848 (2020)
2010 – 2019
- 2019
- [b1]Shixiang Gu:
Sample-efficient deep reinforcement learning for continuous control. University of Cambridge, UK, 2019 - [c23]Ofir Nachum, Michael Ahn, Hugo Ponte, Shixiang Shane Gu, Vikash Kumar:
Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real. CoRL 2019: 110-121 - [c22]Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel, Shixiang Gu:
A Divergence Minimization Perspective on Imitation Learning Methods. CoRL 2019: 1259-1277 - [c21]Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine:
Near-Optimal Representation Learning for Hierarchical Reinforcement Learning. ICLR (Poster) 2019 - [c20]George Tucker, Dieterich Lawson, Shixiang Gu, Chris J. Maddison:
Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives. ICLR (Poster) 2019 - [c19]Seyed Kamyar Seyed Ghasemipour, Shixiang Gu, Richard S. Zemel:
SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies. NeurIPS 2019: 7879-7889 - [c18]Yiding Jiang, Shixiang Gu, Kevin Murphy, Chelsea Finn:
Language as an Abstraction for Hierarchical Deep Reinforcement Learning. NeurIPS 2019: 9414-9426 - [i19]Yiding Jiang, Shixiang Gu, Kevin Murphy, Chelsea Finn:
Language as an Abstraction for Hierarchical Deep Reinforcement Learning. CoRR abs/1906.07343 (2019) - [i18]Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Àgata Lapedriza, Noah Jones, Shixiang Gu, Rosalind W. Picard:
Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog. CoRR abs/1907.00456 (2019) - [i17]Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman:
Dynamics-Aware Unsupervised Discovery of Skills. CoRR abs/1907.01657 (2019) - [i16]Ofir Nachum, Michael Ahn, Hugo Ponte, Shixiang Gu, Vikash Kumar:
Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real. CoRR abs/1908.05224 (2019) - [i15]Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine:
Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning? CoRR abs/1909.10618 (2019) - [i14]Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel, Shixiang Gu:
A Divergence Minimization Perspective on Imitation Learning Methods. CoRR abs/1911.02256 (2019) - 2018
- [c17]Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine:
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning. ICLR (Poster) 2018 - [c16]Vitchyr Pong, Shixiang Gu, Murtaza Dalal, Sergey Levine:
Temporal Difference Models: Model-Free Deep RL for Model-Based Control. ICLR (Poster) 2018 - [c15]George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine:
The Mirage of Action-Dependent Baselines in Reinforcement Learning. ICLR (Workshop) 2018 - [c14]George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine:
The Mirage of Action-Dependent Baselines in Reinforcement Learning. ICML 2018: 5022-5031 - [c13]Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine:
Data-Efficient Hierarchical Reinforcement Learning. NeurIPS 2018: 3307-3317 - [i13]Vitchyr Pong, Shixiang Gu, Murtaza Dalal, Sergey Levine:
Temporal Difference Models: Model-Free Deep RL for Model-Based Control. CoRR abs/1802.09081 (2018) - [i12]George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine:
The Mirage of Action-Dependent Baselines in Reinforcement Learning. CoRR abs/1802.10031 (2018) - [i11]Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine:
Data-Efficient Hierarchical Reinforcement Learning. CoRR abs/1805.08296 (2018) - [i10]Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine:
Near-Optimal Representation Learning for Hierarchical Reinforcement Learning. CoRR abs/1810.01257 (2018) - [i9]George Tucker, Dieterich Lawson, Shixiang Gu, Chris J. Maddison:
Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives. CoRR abs/1810.04152 (2018) - 2017
- [c12]Shixiang Gu, Timothy P. Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine:
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic. ICLR 2017 - [c11]Eric Jang, Shixiang Gu, Ben Poole:
Categorical Reparameterization with Gumbel-Softmax. ICLR (Poster) 2017 - [c10]Natasha Jaques, Shixiang Gu, Richard E. Turner, Douglas Eck:
Tuning Recurrent Neural Networks with Reinforcement Learning. ICLR (Workshop) 2017 - [c9]Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck:
Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. ICML 2017: 1645-1654 - [c8]Shixiang Gu, Ethan Holly, Timothy P. Lillicrap, Sergey Levine:
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. ICRA 2017: 3389-3396 - [c7]Shixiang Gu, Tim Lillicrap, Richard E. Turner, Zoubin Ghahramani, Bernhard Schölkopf, Sergey Levine:
Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning. NIPS 2017: 3846-3855 - [i8]Shixiang Gu, Timothy P. Lillicrap, Zoubin Ghahramani, Richard E. Turner, Bernhard Schölkopf, Sergey Levine:
Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning. CoRR abs/1706.00387 (2017) - [i7]Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine:
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning. CoRR abs/1711.06782 (2017) - 2016
- [c6]Shixiang Gu, Timothy P. Lillicrap, Ilya Sutskever, Sergey Levine:
Continuous Deep Q-Learning with Model-based Acceleration. ICML 2016: 2829-2838 - [c5]Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih:
MuProp: Unbiased Backpropagation for Stochastic Neural Networks. ICLR (Poster) 2016 - [i6]Shixiang Gu, Timothy P. Lillicrap, Ilya Sutskever, Sergey Levine:
Continuous Deep Q-Learning with Model-based Acceleration. CoRR abs/1603.00748 (2016) - [i5]Shixiang Gu, Ethan Holly, Timothy P. Lillicrap, Sergey Levine:
Deep Reinforcement Learning for Robotic Manipulation. CoRR abs/1610.00633 (2016) - [i4]Eric Jang, Shixiang Gu, Ben Poole:
Categorical Reparameterization with Gumbel-Softmax. CoRR abs/1611.01144 (2016) - [i3]Shixiang Gu, Timothy P. Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine:
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic. CoRR abs/1611.02247 (2016) - [i2]Natasha Jaques, Shixiang Gu, Richard E. Turner, Douglas Eck:
Tuning Recurrent Neural Networks with Reinforcement Learning. CoRR abs/1611.02796 (2016) - 2015
- [c4]Nilesh Tripuraneni, Shixiang Gu, Hong Ge, Zoubin Ghahramani:
Particle Gibbs for Infinite Hidden Markov Models. NIPS 2015: 2395-2403 - [c3]Shixiang Gu, Zoubin Ghahramani, Richard E. Turner:
Neural Adaptive Sequential Monte Carlo. NIPS 2015: 2629-2637 - [c2]Shixiang Gu, Luca Rigazio:
Towards Deep Neural Network Architectures Robust to Adversarial Examples. ICLR (Workshop) 2015 - [i1]Shixiang Gu, Zoubin Ghahramani, Richard E. Turner:
Neural Adaptive Sequential Monte Carlo. CoRR abs/1506.03338 (2015) - 2012
- [c1]Steve Mann, Raymond Chun Hing Lo, Kalin Ovtcharov, Shixiang Gu, David Dai, Calvin Ngan, Tao Ai:
Realtime HDR (High Dynamic Range) video for eyetap wearable computers, FPGA-based seeing aids, and glasseyes (EyeTaps). CCECE 2012: 1-6
last updated on 2023-05-26 17:41 CEST by the dblp team
all metadata released as open data under CC0 1.0 license