


SIGGRAPH Asia 2025 Conference Papers: Hong Kong
- Taku Komura, Michael Wimmer, Hongbo Fu: Proceedings of the SIGGRAPH Asia 2025 Conference Papers, SA Conference Papers 2025, Hong Kong, December 15-18, 2025. ACM 2025, ISBN 979-8-4007-2137-3
3D Reconstruction & Intelligent Geometry
- Fangzhou Gao, Yuzhen Kang, Lianghao Zhang, Li Wang, Qishen Wang, Jiawan Zhang: RCTrans: Transparent Object Reconstruction in Natural Scene via Refractive Correspondence Estimation. 1:1-1:11
- Yuheng Jiang, Chengcheng Guo, Yize Wu, Yu Hong, Shengkun Zhu, Zhehao Shen, Yingliang Zhang, Shaohui Jiao, Zhuo Su, Lan Xu, Marc Habermann, Christian Theobalt: Topology-Aware Optimization of Gaussian Primitives for Human-Centric Volumetric Videos. 2:1-2:12
- Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer: Surface-Aware Distilled 3D Semantic Features. 3:1-3:12
- Shai Krakovsky, Gal Fiebelman, Sagie Benaim, Hadar Averbuch-Elor: Lang3D-XL: Language Embedded 3D Gaussians for Large-scale Scenes. 4:1-4:11
Dynamic Generative Video: From Synthesis to Real-Time Editing
- Shai Yehezkel, Omer Dahary, Andrey Voynov, Daniel Cohen-Or: Navigating with Annealing Guidance Scale in Diffusion Space. 5:1-5:11
- Beijia Lu, Ziyi Chen, Jing Xiao, Jun-Yan Zhu: Input-Aware Sparse Attention for Real-Time Co-Speech Video Generation. 6:1-6:11
- Dvir Samuel, Matan Levy, Nir Darshan, Gal Chechik, Rami Ben-Ari: OmnimatteZero: Fast Training-free Omnimatte with Pre-trained Video Diffusion Models. 7:1-7:11
- Shrisha Bharadwaj, Haiwen Feng, Giorgio Becherini, Victoria Fernández Abrevaya, Michael J. Black: GenLit: Reformulating Single-Image Relighting as Video Generation. 8:1-8:12
- Danah Yatim, Rafail Fridman, Omer Bar-Tal, Tali Dekel: DynVFX: Augmenting Real Videos with Dynamic Content. 9:1-9:12
Global Illumination & Real-Time Rendering
- Rui Su, Honghao Dong, Haojie Jin, Yisong Chen, Guoping Wang, Sheng Li: Vertex Features for Neural Global Illumination. 10:1-10:11
- Hongtao Sheng, Yuchi Huo, Chuankun Zheng, Guangzhi Han, Yifan Peng, Shi Li, Bin Zang, Hao Zhu, Rui Tang, Yiming Wu, Rui Wang, Hujun Bao: NeLiF: Neural Lighting Function Generation for Real-Time Indoor Rendering. 11:1-11:11
- Zheng Zeng, Markus Kettunen, Chris Wyman, Lifan Wu, Ravi Ramamoorthi, Ling-Qi Yan, Daqi Lin: ReSTIR PG: Path Guiding with Spatiotemporally Resampled Paths. 12:1-12:11
- Pengpei Hong, Meng Duan, Beibei Wang, Cem Yuksel, Tizian Zeltner, Daqi Lin: Sample Space Partitioning and Spatiotemporal Resampling for Specular Manifold Sampling. 13:1-13:10
High-Performance Simulation Algorithms
- Elie Diaz, Jerry Hsu, Eisen Montalvo-Ruiz, Chris Giles, Cem Yuksel: Implicit Position-Based Fluids. 14:1-14:9
- Zhaoning Wang, Xinyue Wei, Ruoxi Shi, Xiaoshuai Zhang, Hao Su, Minghua Liu: PartUV: Part-Based UV Unwrapping of 3D Meshes. 15:1-15:12
- Siqi Wang, Janos Meny, Izak Grguric, Mehdi Rahimzadeh, Denis Zorin, Daniele Panozzo, Hsueh-Ti Derek Liu: Solid-Shell Labeling for Discrete Surfaces. 16:1-16:9
- Anandhu Sureshkumar, Amal Dev Parakkat, Georges-Pierre Bonneau, Stefanie Hahmann, Marie-Paule Cani: RibbonSculpt: Voronoi Ball based 3D Sculpting from Sparse VR Ribbons. 17:1-17:11
Camera Control and Directed Storytelling in Video Generation
- Yuhao Liu, Tengfei Wang, Fang Liu, Zhenwei Wang, Rynson W. H. Lau: Shape-for-Motion: Precise and Consistent Video Editing With 3D Proxy. 18:1-18:12
- Jiwen Yu, Jianhong Bai, Yiran Qin, Quande Liu, Xintao Wang, Pengfei Wan, Di Zhang, Xihui Liu: Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval. 19:1-19:11
- Yawen Luo, Xiaoyu Shi, Jianhong Bai, Menghan Xia, Tianfan Xue, Xintao Wang, Pengfei Wan, Di Zhang, Kun Gai: CamCloneMaster: Enabling Reference-based Camera Control for Video Generation. 20:1-20:10
- Chenjie Cao, Jingkai Zhou, Shikai Li, Jingyun Liang, Chaohui Yu, Fan Wang, Xiangyang Xue, Yanwei Fu: Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation. 21:1-21:12
- Jingwen He, Hongbo Liu, Jiajun Li, Ziqi Huang, Yu Qiao, Wanli Ouyang, Ziwei Liu: Cut2Next: Generating Next Shot via In-Context Tuning. 22:1-22:11
- Chenhao Ji, Chaohui Yu, Junyao Gao, Fan Wang, Cairong Zhao: CamPVG: Camera-Controlled Panoramic Video Generation with Epipolar-Aware Diffusion. 23:1-23:12
Material & Texture Modeling
- Pengfei Shen, Feifan Qu, Li Liao, Ruizhen Hu, Yifan Peng: S3 Imagery: Specular Shading from Scratch-Anisotropy. 24:1-24:11
- Ze Yuan, Xin Yu, Yang-Tian Sun, Yuan-Chen Guo, Yan-Pei Cao, Ding Liang, Xiaojuan Qi: SeqTex: Generate Mesh Textures in Video Sequence. 25:1-25:12
Neural & Implicit Representations for Geometry and Physics
- Mengfei Liu, Yue Chang, Zhecheng Wang, Peter Yichen Chen, Eitan Grinspun: Precise Gradient Discontinuities in Neural Fields for Subspace Physics. 26:1-26:11
- Yutao Zhang, Stephanie Wang, Mikhail Bessmeltsev: Variational Neural Surfacing of 3D Sketches. 27:1-27:12
- Yibo Liu, Zhixin Fang, Sune Darkner, Noam Aigerman, Kenny Erleben, Paul Kry, Teseo Schneider: Neural Kinematic Bases for Fluids. 28:1-28:10
Creating Digital Humans
- Yuxuan Xue, Xianghui Xie, Margaret Kostyrko, Gerard Pons-Moll: InfiniHuman: Realistic 3D Human Creation with Precise Control. 29:1-29:12
- Chao Shi, Shenghao Jia, Jinhui Liu, Yong Zhang, Liangchao Zhu, Zhonglei Yang, Jinze Ma, Chaoyue Niu, Chengfei Lv: HRM^2Avatar: High-Fidelity Real-Time Mobile Avatars from Monocular Phone Scans. 30:1-30:12
- Tianjian Jiang, Hsuan-I Ho, Manuel Kaufmann, Jie Song: PriorAvatar: Efficient and Robust Avatar Creation from Monocular Video Using Learned Priors. 31:1-31:10
- Xuan Gao, Jingtao Zhou, Dongyu Liu, Yuqi Zhou, Juyong Zhang: Constructing Diffusion Avatar with Learnable Embeddings. 32:1-32:13
- Jie Yang, Bo-Tao Zhang, Feng-Lin Liu, Hongbo Fu, Yu-Kun Lai, Lin Gao: Single-Image 3D Human Reconstruction with 3D-Aware Diffusion Priors and Facial Enhancement. 33:1-33:13
Visibility & Real-Time Rendering
- Xiangyu Wang, Thomas Köhler, Jun Lin Qiu, Shohei Mori, Markus Steinberger, Dieter Schmalstieg: NeuralPVS: Learned Estimation of Potentially Visible Sets. 34:1-34:11
- Jun-Hao Wang, Yi-Yang Tian, Baoquan Chen, Peng-Shuai Wang: Neural Visibility of Point Sets. 35:1-35:11
- Sebastian Künzel, Sergej Geringer, Quynh Quang Ngo, Philip Voglreiter, Daniel Weiskopf, Dieter Schmalstieg: Potentially Visible Set Generation with the Disocclusion Buffer. 36:1-36:12
Physically Based Simulation & Dynamic Environments
- Pradyumna Yalandur Muralidhar, Yuxuan Xue, Xianghui Xie, Margaret Kostyrko, Gerard Pons-Moll: PhySIC: Physically Plausible 3D Human-Scene Interaction and Contact from a Single Image. 37:1-37:12
- Minkwan Kim, Yoonsang Lee: FreeMusco: Motion-Free Learning of Latent Control for Morphology-Adaptive Locomotion in Musculoskeletal Characters. 38:1-38:11
- Jie Chen, Zherong Pan, Bo Ren: Fast & Stable Control of Coupled Solid-Fluid Dynamic Systems. 39:1-39:12
Computational Design & Fabricability
- Hao Xu, Yuqing Zhang, Yiqian Wu, Xinyang Zheng, Yutao Liu, Xiangjun Tang, Yunhan Yang, Ding Liang, Yingtian Liu, Yuanchen Guo, Yanpei Cao, Xiaogang Jin: LegoACE: Autoregressive Construction Engine for Expressive LEGO® Assemblies. 40:1-40:11
- David Cha, Oded Stein: Computational Design of Shape-Aware Sieves. 41:1-41:11
- Anna Maria Eggler, Nico Pietroni, Pengbin Tang, Michal Piovarci, Bernd Bickel: Designing with Tension: Nearly-Developable Patch Layouts. 42:1-42:11
- Rulin Chen, Xuyang Ma, Praveer Tewari, Chi-Wing Fu, Peng Song: Inverse Tiling of 2D Finite Domains. 43:1-43:11
Computational Photography & Cameras
- Yancheng Cai, Robert Wanat, Rafal K. Mantiuk: CameraVDP: Perceptual Display Assessment with Uncertainty Estimation via Camera and Visual Difference Prediction. 44:1-44:10
- Yiyang Wang, Xi Chen, Xiaogang Xu, Yu Liu, Hengshuang Zhao: DiffCamera: Arbitrary Refocusing on Images. 45:1-45:10
- SaiKiran Kumar Tedla, Zhoutong Zhang, Xuaner Zhang, Shumian Xin: Learning to Refocus with Video Diffusion Models. 46:1-46:11
- Arjun Teh, Delio Vicini, Bernd Bickel, Ioannis Gkioulekas, Matthew O'Toole: Automated design of compound lenses with discrete-continuous optimization. 47:1-47:11
- Jingwei Ma, Vivek Jayaram, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M. Seitz: UltraZoom: Generating Gigapixel Images from Regular Photos. 48:1-48:10
Sampling, Reconstruction & Variance Reduction
- Andrew Tinits, Stephen Mann: Nonlinear Noise2Noise for Efficient Monte Carlo Denoiser Training. 49:1-49:11
- Hiroyuki Sakai, Christian Freude, Michael Wimmer, David Hahn: Statistical Error Reduction for Monte Carlo Rendering. 50:1-50:12
Audio-Driven Facial and Portrait Animation
- Jiye Lee, Chenghui Li, Linh Tran, Shih-En Wei, Jason M. Saragih, Alexander Richard, Hanbyul Joo, Shaojie Bai: Audio Driven Real-Time Facial Animation for Social Telepresence. 51:1-51:12
- Xin Lu, Chuanqing Zhuang, Chenxi Jin, Zhengda Lu, Yiqun Wang, Wu Liu, Jun Xiao: LSF-Animation: Label-Free Speech-Driven Facial Animation via Implicit Feature Representation. 52:1-52:12
- Jiahao Cui, Baoyou Chen, Mingwang Xu, Hanlin Shang, Yuxuan Chen, Qinkun Su, Zilong Dong, Yao Yao, Jingdong Wang, Siyu Zhu: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization and Temporal Motion Modulation. 53:1-53:10
- Kartik Teotia, Helge Rhodin, Mohit Mendiratta, Hyeongwoo Kim, Marc Habermann, Christian Theobalt: Audio Driven Universal Gaussian Head Avatars. 54:1-54:12
- Xuangeng Chu, Nabarun Goswami, Ziteng Cui, Hanqin Wang, Tatsuya Harada: ARTalk: Speech-Driven 3D Head Animation via Autoregressive Model. 55:1-55:9
- Chenxu Zhang, Zenan Li, Hongyi Xu, You Xie, Xiaochen Zhao, Tianpei Gu, Guoxian Song, Xin Chen, Chao Liang, Jianwen Jiang, Linjie Luo: X-Actor: Emotional and Expressive Long-Range Portrait Acting from Audio. 56:1-56:11
Generative 3D Shape Synthesis
- Yangguang Li, Xianglong He, Zi-Xin Zou, Zexiang Liu, Wanli Ouyang, Ding Liang, Yan-Pei Cao: ShapeGen: Towards High-Quality 3D Shape Synthesis. 57:1-57:12
- Hanxiao Wang, Biao Zhang, Jonathan Klein, Dominik L. Michels, Dong-Ming Yan, Peter Wonka: Autoregressive Generation of Static and Growing Trees. 58:1-58:12
- Yunhan Yang, Yufan Zhou, Yuan-Chen Guo, Zi-Xin Zou, Yukun Huang, Ying-Tian Liu, Hao Xu, Ding Liang, Yan-Pei Cao, Xihui Liu: OmniPart: Part-Aware 3D Generation with Semantic Decoupling and Structural Cohesion. 59:1-59:12
- Qimin Chen, Yuezhi Yang, Yifan Wang, Vladimir G. Kim, Siddhartha Chaudhuri, Hao Zhang, Zhiqin Chen: ART-DECO: Arbitrary Text Guidance for 3D Detailizer Construction. 60:1-60:12
- Jingdong Zhang, Weikai Chen, Yuan Liu, Jionghao Wang, Zhengming Yu, Zhuowen Shen, Bo Yang, Wenping Wang, Xin Li: SPGen: Spherical Projection as Consistent and Flexible Representation for Single Image 3D Shape Generation. 61:1-61:12
Image Restoration, Editing & Enhancement
- Weiguang Zhang, Huangcheng Lu, Maizhen Ning, Xiaowei Huang, Wei Wang, Kaizhu Huang, Qiufeng Wang: DvD: Unleashing a Generative Paradigm for Document Dewarping via Coordinates-based Diffusion Model. 62:1-62:12
- Sean Man, Guy Ohayon, Ron Raphaeli, Matan Kleiner, Michael Elad: ELAD: Blind Face Restoration using Expectation-based Likelihood Approximation and Diffusion Prior. 63:1-63:12
- Xin Zhang, Zhuang Zhou, Yixiao Yang, Haijun Xie, Haowen Yan, Hexiang Zhai, Binghua Su: Self-supervised Underwater Color Restoration via Wavelet-Diffusion Model with Filtered Multi-Scale Feature Distillation. 64:1-64:11
Differentiable Rendering & Applications
- Yaoan Gao, Jiamin Xu, James Tompkin, Qi Wang, Zheng Dong, Hujun Bao, Yujun Shen, Huamin Wang, Changqing Zou, Weiwei Xu: Efficient Object Reconstruction with Differentiable Area Light Shading. 65:1-65:12
- Mengqi Xia, Bai Xue, Rachel Liang, Holly E. Rushmeier: Spectral Reconstruction with Uncertainty Quantification via Differentiable Rendering and Null-Space Sampling. 66:1-66:11
- Matthieu Josse, Joey Litalien, Adrien Gruson: Adaptive Neural Kernels for Gradient-domain Rendering. 67:1-67:11
Perception and Performance in AR/VR Systems
- Dongyeon Kim, Maliha Ashraf, Alexandre Chapiro, Rafal K. Mantiuk: Supra-threshold Contrast Perception in Augmented Reality. 68:1-68:11
- Daniel Gurman, Daniel P. Spiegel, Kevin W. Rio: Vertical Binocular Misalignment in AR Impairs Reading Performance. 69:1-69:10
- Jenna Kang, Budmonde Duinkharjav, Niall Williams, Qi Sun: Performance Analysis of Catch-Up Eye Movements in Visual Tracking. 70:1-70:12
4D Gaussian Splatting for Dynamic Scene Reconstruction
- Daheng Yin, Isaac Ding, Yili Jin, Jianxin Shi, Jiangchuan Liu: TrackerSplat: Exploiting Point Tracking for Fast and Robust Dynamic 3D Gaussians Reconstruction. 71:1-71:11
- Taeho Kang, Jaeyeon Park, Kyungjin Lee, Youngki Lee: Clustered Error Correction with Grouped 4D Gaussian Splatting. 72:1-72:12
- Yilong Li, Bo Pang, Yisong Chen, Guoping Wang: Anchored 4D Gaussian Splatting for Dynamic Novel View Synthesis. 73:1-73:11
- Meng-Li Shih, Ying-Huan Chen, Yu-Lun Liu, Brian Curless: Prior-Enhanced Gaussian Splatting for Dynamic Scene Reconstruction from Casual Video. 74:1-74:13
- Yutian Chen, Shi Guo, Tianshuo Yang, Lihe Ding, Xiuyuan Yu, Jinwei Gu, Tianfan Xue: 4DSloMo: 4D Reconstruction for High Speed Scene with Asynchronous Capture. 75:1-75:11
Garment & Cloth Modeling, Simulation and Rendering
- Dewen Guo, Zhendong Wang, Zegao Liu, Sheng Li, Guoping Wang, Yin Yang, Huamin Wang: Progressive Outfit Assembly and Instantaneous Pose Transfer. 76:1-76:12
- Yura Hwang, Jenny Han Lin, Jerry Hsu, Benjamin Mastripolito, James McCann, Cem Yuksel: Neighbor-Aware Data-Driven Relaxation of Stitch Mesh Models for Knits. 77:1-77:11
- Elias Gueidon, Maurizio M. Chiaramonte: A Nonconforming Formulation of Cloth. 78:1-78:11
3D Reconstruction & Rendering
- Xiaokun Pan, Zhenzhe Li, Zhichao Ye, Hongjia Zhai, Guofeng Zhang: EGG-Fusion: Efficient 3D Reconstruction with Geometry-aware Gaussian Surfel on the Fly. 79:1-79:12
- Zhenyuan Liu, Bharath Seshadri, George Kopanas, Bernd Bickel: Inverse Radiative Transport for Infrared Scenes with Gaussian Primitives. 80:1-80:12
- Kai Deng, Yigong Zhang, Jian Yang, Jin Xie: GigaSLAM: Large-Scale Monocular SLAM with Hierarchical Gaussian Splats. 81:1-81:10
Animation, Simulation & Deformation
- Jakob Andreas Bærentzen, Jonàs Martínez, Jeppe Revall Frisvad, Sylvain Lefebvre: Improving Curl Noise. 82:1-82:10
- Yarin Bekor, Gal Michael Harari, Or Perel, Or Litany: Gaussian See, Gaussian Do: Semantic 3D Motion Transfer from Multiview Video. 83:1-83:10
- Roman Fedotov, Brian Budge, Ladislav Kavan: QMF-Blend: Quantized Matrix Factorization for Efficient Blendshape Compression. 84:1-84:10
- Haoyuan Shi, Yunxin Li, Xinyu Chen, Longyue Wang, Baotian Hu, Min Zhang: AniMaker: Multi-Agent Animated Storytelling with MCTS-Driven Clip Generation. 85:1-85:11
Neural Fields and Surface Reconstruction
- Anh Truong, Ahmed H. Mahmoud, Mina Konakovic-Lukovic, Justin Solomon: Low-Rank Adaptation of Neural Fields. 86:1-86:12
- Mustafa B. Yaldiz, Ishit Mehta, Nithin Raghavan, Andreas Meuleman, Tzu-Mao Li, Ravi Ramamoorthi: Spectral Prefiltering of Neural Fields. 87:1-87:12
- Lukas Radl, Felix Windisch, Thomas Deixelberger, Jozef Hladky, Michael Steiner, Dieter Schmalstieg, Markus Steinberger: SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction. 88:1-88:11
- Da Li, Donggang Jia, Yousef Rajeh, Dominik Engel, Ivan Viola: RaRa Clipper: A Clipper for Gaussian Splatting Based on Ray Tracer and Rasterizer. 89:1-89:10
Vector Graphics & Sketches
- Jinfan Yang, Leo Foord-Kelcey, Suzuran Takikawa, Nicholas Vining, Niloy J. Mitra, Alla Sheffer: Capturing Non-Linear Human Perspective in Line Drawings. 90:1-90:11
- Hsiao-Yuan Chin, I-Chao Shen, Yi-Ting Chiu, Ariel Shamir, Bing-Yu Chen: AutoSketch: VLM-assisted Style-Aware Vector Sketch Completion. 91:1-91:11
- Ronghuan Wu, Wanchao Su, Jing Liao: LayerPeeler: Autoregressive Peeling for Layer-wise Image Vectorization. 92:1-92:20
- Yiming Zhao, Yuanpeng Gao, Yuxuan Luo, Jiwei Duan, Shisong Lin, Longfei Xiong, Zhouhui Lian: UTDesign: A Unified Framework for Stylized Text Editing and Generation in Graphic Design Images. 93:1-93:11
Intelligent CAD: B-Reps, NURBs & Splines
- Xiang Xu, Pradeep Kumar Jayaraman, Joseph George Lambourne, Yilin Liu, Durvesh Malpure, Pete Meltzer: AutoBrep: Autoregressive B-Rep Generation with Unified Topology and Geometry. 94:1-94:12
- Yang You, Mikaela Angelina Uy, Jiaqi Han, Rahul Krishna Thomas, Haotong Zhang, Yi Du, Hansheng Chen, Francis Engelmann, Suya You, Leonidas J. Guibas: Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization. 95:1-95:12
It's All About the Motion
- Xiaotang Zhang, Ziyi Chang, Qianhui Men, Hubert P. H. Shum: Motion In-Betweening for Densely Interacting Characters. 96:1-96:11
- Yuxuan Mu, Hung Yu Ling, Yi Shi, Ismael Baira Ojeda, Pengcheng Xi, Chang Shu, Fabio Zinno, Xue Bin Peng: StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data. 97:1-97:12
Computational Design & Geometry
- Toshiki Aoki, Tomohiro Tachi, Mina Konakovic-Lukovic: Discovering Folding Lines for Surface Compression. 98:1-98:12
- Aviv Segall, Jing Ren, Marcel Padilla, Olga Sorkine-Hornung: Reconfigurable Hinged Kirigami Tessellations. 99:1-99:11
- Logan Numerow, Stelian Coros, Bernhard Thomaszewski: Star-Shaped Distance Voronoi Diagrams for 3D Metamaterial Design. 100:1-100:10
Compositional and Layout-Guided Image Synthesis
- Gaurav Parmar, Or Patashnik, Kuan-Chieh Wang, Daniil Ostashev, Srinivasa Narasimhan, Jun-Yan Zhu, Daniel Cohen-Or, Kfir Aberman: Object-level Visual Prompts for Compositional Image Generation. 101:1-101:12
- Guocheng Qian, Daniil Ostashev, Egor Nemchinov, Sergey Tulyakov, Kuan-Chieh Jackson Wang, Kfir Aberman: ComposeMe: Attribute-Specific Image Prompts for Controllable Human Image Generation. 102:1-102:12
- Junyu Liu, R. Kenny Jones, Daniel Ritchie: PartComposer: Learning and Composing Part-Level Concepts from Single-Image Examples. 103:1-103:11
- Zedong Zhang, Ying Tai, Jianjun Qian, Jian Yang, Jun Li: AGSwap: Overcoming Category Boundaries in Object Fusion via Adaptive Group Swapping. 104:1-104:12
Hair & Faces
- Kenji Tojo, Liwen Hu, Nobuyuki Umetani, Hao Li: Strands2Cards: Automatic Generation of Hair Cards from Strands. 105:1-105:11
- Yuze He, Yanning Zhou, Wang Zhao, Jingwen Ye, Yushi Bai, Kaiwen Xiao, Yong-Jin Liu, Zhongqian Sun, Wei Yang: CHARM: Control-point-based 3D Anime Hairstyle Auto-Regressive Modeling. 106:1-106:12
- Arvin Lin, Abhijeet Ghosh: Single-Shot Facial Capture using Polarized RGB Sinusoidal Illumination. 107:1-107:11
- Pramod Rao, Abhimitra Meka, Xilong Zhou, Gereon Fox, Mallikarjun B. R., Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Thabo Beeler, Mohamed Elgharib, Marc Habermann, Christian Theobalt: 3DPR: Single Image 3D Portrait Relighting with Generative Priors. 108:1-108:12
Differentiable Physics and Fabrication-Aware Optimization
- Xiao Zhan, Clément Jambon, Evan Thompson, Kenney Ng, Mina Konakovic-Lukovic: PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models. 109:1-109:11
Generative Scenes & Panoramas
- Geonung Kim, Janghyeok Han, Sunghyun Cho: VideoFrom3D: 3D Scene Video Generation via Complementary Image and Video Diffusion Models. 110:1-110:11
- Zhaoyang Zhang, Yannick Hold-Geoffroy, Milos Hasan, Ziwen Chen, Fujun Luan, Julie Dorsey, Yiwei Hu: Generating 360° Video is What You Need For a 3D Scene. 111:1-111:12
- Avinash Paliwal, Xilong Zhou, Andrii Tsarov, Nima Kalantari: PanoDreamer: Optimization-Based Single Image to 360 3D Scene With Diffusion. 112:1-112:10
- Manuel-Andreas Schneider, Lukas Höllein, Matthias Nießner: WorldExplorer: Towards Generating Fully Navigable 3D Scenes. 113:1-113:11
Human & Robot Animation & Behavior
- Zeyi Zhang, Yanju Zhou, Heyuan Yao, Tenglong Ao, Xiaohang Zhan, Libin Liu: Social Agent: Mastering Dyadic Nonverbal Behavior Generation via Conversational LLM Agents. 114:1-114:12
- Juyeong Hwang, Seong-Eun Hong, JaeYoung Seon, Hyeongyeop Kang: How Does a Virtual Agent Decide Where to Look? Symbolic Cognitive Reasoning for Embodied Head Rotation. 115:1-115:12
- Haiwei Xue, Yanbo Fan, Xuan Wang, Zhiyong Wu: Echo: Enhancing Conversational Behavior Generation via Hierarchical Semantic Comprehension with Large Language Models. 116:1-116:9
- Haoran Chen, Yiteng Xu, Yiming Ren, Yaoqin Ye, Xinran Li, Ning Ding, Yuxuan Wu, Yaoze Liu, Peishan Cong, Ziyi Wang, Bushi Liu, Yuhan Chen, Zhiyang Dou, Xiaokun Leng, Manyi Li, Yuexin Ma, Changhe Tu: SymBridge: A Human-in-the-Loop Cyber-Physical Interactive System for Adaptive Human-Robot Symbiosis. 117:1-117:12
- Guangyan Chen, Meiling Wang, Te Cui, Luojie Yang, Qi Shao, Lin Zhao, Tianle Zhang, Yihang Li, Yi Yang, Yufeng Yue: Unifying Latent Action and Latent State Pre-training for Policy Learning from Videos. 118:1-118:11
- Ran Dong, Shaowen Ni, Xi Yang: JoruriPuppet: Learning Tempo-Changing Mechanisms Beyond the Beat for Music-to-Motion Generation with Expressive Metrics. 119:1-119:11
Efficient and Robust Algorithms for Geometric Computing
- Lucas Brifault, David Cohen-Steiner, Mathieu Desbrun: Efficient and Scalable Spatial Regularization of Optimal Transport. 120:1-120:10
- Yuta Noma, Alec Jacobson, Karan Singh: Medial Sphere Preconditioning for Knot Untangling and Volume-Filling Curves. 121:1-121:10
4D & Dynamic Scene Generation and Reconstruction
- Ting-Hsuan Liao, Haowen Liu, Yiran Xu, Songwei Ge, Gengshan Yang, Jia-Bin Huang: PAD3R: Pose-Aware Dynamic 3D Reconstruction from Casual Videos. 122:1-122:11
- Guo Chen, Jiarun Liu, Sicong Du, Chenming Wu, Deqi Li, Shi-Sheng Huang, Guofeng Zhang, Sheng Yang: GS-RoadPatching: Inpainting Gaussians via 3D Searching and Placing for Driving Scenes. 123:1-123:11
- Hai-Long Qin, Sixian Wang, Guo Lu, Jincheng Dai: Neural Hamiltonian Deformation Fields for Dynamic Scene Rendering. 124:1-124:11
- Felix Taubner, Ruihang Zhang, Mathieu Tuli, Sherwin Bahmani, David B. Lindell: MVP4D: Multi-View Portrait Video Diffusion for Animatable 4D Avatars. 125:1-125:11
- Yihao Zhi, Chenghong Li, Hongjie Liao, Xihe Yang, Zhengwentai Sun, Jiahao Chang, Xiaodong Cun, Wensen Feng, Xiaoguang Han: MV-Performer: Taming Video Diffusion Model for Faithful and Synchronized Multi-view Performer Synthesis. 126:1-126:14
Advanced Light Transport & PDE Solvers
- Anchang Bao, Jie Xu, Enya Shen, Jianmin Wang: Off-Centered WoS-Type Solvers with Statistical Weighting. 127:1-127:11
- Zhiqi Li, Jinjin He, Barnabás Börcsök, Taiyuan Zhang, Duowen Chen, Tao Du, Ming C. Lin, Greg Turk, Bo Zhu: An Adjoint Method for Differentiable Fluid Simulation on Flow Maps. 128:1-128:12
3D Reconstruction & View Synthesis
- Gurutva Patle, Nilay Girgaonkar, Nagabhushan Somraj, Rajiv Soundararajan: AD-GS: Alternating Densification for Sparse-Input 3D Gaussian Splatting. 129:1-129:11
- Jiatong Xia, Lingqiao Liu: Training-Free Instance-Aware 3D Scene Reconstruction and Diffusion-Based View Synthesis from Sparse Images. 130:1-130:12
- Yu Lu, Hao Pan, Dian Ding, Jiatong Ding, Yongjian Fu, Yi-Chao Chen, Ju Ren, Guangtao Xue: MODepth: Benchmarking Mobile Multi-frame Monocular Depth Estimation with Optical Image Stabilization. 131:1-131:12
- Li Wang, Yiyu Zhuang, Yanwen Wang, Xun Cao, Chuan Guo, Xinxin Zuo, Hao Zhu: Sketch2PoseNet: Efficient and Generalized Sketch to 3D Human Pose Prediction. 132:1-132:12
- Zehuan Huang, Haoran Feng, Yang-Tian Sun, Yuan-Chen Guo, Yan-Pei Cao, Lu Sheng: AnimaX: Animating the Inanimate in 3D with Joint Video-Pose Diffusion Models. 133:1-133:13
- Qi Sun, Can Wang, Jiaxiang Shang, Wensen Feng, Jing Liao: Animus3D: Text-driven 3D Animation via Motion Score Distillation. 134:1-134:11
- Guoxian Song, Hongyi Xu, Xiaochen Zhao, You Xie, Tianpei Gu, Zenan Li, Chenxu Zhang, Linjie Luo: X-UniMotion: Animating Human Images with Expressive, Unified and Identity-Agnostic Motion Latents. 135:1-135:11
- Jiayi Zheng, Xiaodong Cun: FairyGen: Storied Cartoon Video from a Single Child-Drawn Character. 136:1-136:11
Real-Time Rendering & System Optimization
- Yi-Hsin Li, Thomas Sikora, Sebastian Knorr, Mårten Sjöström: 3D SMoE Splatting for Edge-aware Realtime Radiance Field Rendering. 137:1-137:11
- Weikai Lin, Sushant Kondguli, Carl Marshall, Yuhao Zhu: PowerGS: Display-Rendering Power Co-Optimization for Neural Rendering in Power-Constrained XR Systems. 138:1-138:12
- Wolfgang Tatzgern, Pascal Stadlbauer, Joerg H. Mueller, Martin Winter, Martin Sattlecker, Markus Steinberger: Sparse Cache Updates for Scalable Distributed Effect-Based Rendering. 139:1-139:11
- Chenyu Zuo, Yazhen Yuan, Zhizhen Wu, Zhijian Liu, Jingzhen Lan, Ming Fu, Yuchi Huo, Rui Wang: StereoFG: Generating Stereo Frames from Centered Feature Stream. 140:1-140:11
Generative 3D Modeling
- Maxim Gumin, Do Heon Han, Seung Jean Yoo, Aditya Ganeshan, R. Kenny Jones, Kailiang Fu, Rio Aguina-Kang, Stewart Morris, Daniel Ritchie: Procedural Scene Programs for Open-Universe Scene Generation: LLM-Free Error Correction via Program Search. 141:1-141:11
- Keyu Du, Jingyu Hu, Haipeng Li, Hao Xu, Haibin Huang, Chi-Wing Fu, Shuaicheng Liu: Hierarchical Neural Semantic Representation for 3D Semantic Correspondence. 142:1-142:11
- Zefan Qu, Zhenwei Wang, Haoyuan Wang, Ke Xu, Gerhard Petrus Hancke, Rynson W. H. Lau: StyleSculptor: Zero-Shot Style-Controllable 3D Asset Generation with Texture-Geometry Dual Guidance. 143:1-143:12
- Xuancheng Jin, Rengan Xie, Wenting Zheng, Rui Wang, Hujun Bao, Yuchi Huo: Fuse3D: Generating 3D Assets Controlled by Multi-Image Fusion. 144:1-144:12
- Sauradip Nag, Daniel Cohen-Or, Hao Zhang, Ali Mahdavi Amiri: In-2-4D: Inbetweening from Two Single-View Images to 4D Generation. 145:1-145:12
Cameras, Sensors, and Acquisition
- Tzofi Klinghoffer, Siddharth Somasundaram, Xiaoyu Xiang, Yuchen Fan, Christian Richardt, Akshat Dave, Ramesh Raskar, Rakesh Ranjan: Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light. 146:1-146:12
- Jiaheng Li, Qiyu Dai, Lihan Li, Praneeth Chakravarthula, He Sun, Baoquan Chen, Wenzheng Chen: Robust Single-shot Structured Light 3D Imaging via Neural Feature Decoding. 147:1-147:11
- Dominik Scheuble, Andrea Ramazzina, Hanno Holzhüter, Stefano Gasperini, Steven Peters, Federico Tombari, Mario Bijelic, Felix Heide: Transient LASSO: Transient Large-Scale Scene Reconstruction. 148:1-148:12
- Giancarlo Pereira, Yidan Gao, Yurii Piadyk, David Fouhey, Cláudio T. Silva, Daniele Panozzo: LookUp3D: Data-Driven 3D Scanning. 149:1-149:11
Motion Transfer & Control
- Ling-Hao Chen, Yuhong Zhang, Zixin Yin, Zhiyang Dou, Xin Chen, Jingbo Wang, Taku Komura, Lei Zhang: Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence. 150:1-150:11
- Qiao Feng, Yiming Huang, Yufu Wang, Jiatao Gu, Lingjie Liu: PhysHMR: Learning Humanoid Control Policies from Vision for Physically Plausible Human Motion Reconstruction. 151:1-151:10
- Purvi Goel, Guy Tevet, C. Karen Liu, Kayvon Fatahalian: Generating Detailed Character Motion from Blocking Poses. 152:1-152:7
- Chen Tessler, Yifeng Jiang, Erwin Coumans, Zhengyi Luo, Xue Bin Peng, Gal Chechik: MaskedManipulator: Versatile Whole-Body Control for Loco-Manipulation. 153:1-153:11
Objects in Parts & Articulation
- Chuhao Chen, Isabella Liu, Xinyue Wei, Hao Su, Minghua Liu: FreeArt3D: Training-Free Articulated Object Generation using 3D Diffusion. 154:1-154:13
- Ruijie Lu, Yu Liu, Jiaxiang Tang, Junfeng Ni, Yuxiang Wang, Diwen Wan, Gang Zeng, Yixin Chen, Siyuan Huang: Generating Objects with Part-Articulation from a Single Image. 155:1-155:13
- Sylvia Yuan, Ruoxi Shi, Xinyue Wei, Xiaoshuai Zhang, Hao Su, Minghua Liu: LARM: A Large Articulated Object Reconstruction Model. 156:1-156:12
- Honghua Chen, Yushi Lan, Yongwei Chen, Xingang Pan: ArtiLatent: Realistic Articulated 3D Object Generation via Structured Latents. 157:1-157:11
- Wang Zhao, Yan-Pei Cao, Jiale Xu, Yuejiang Dong, Ying Shan: Assembler: Scalable 3D Part Assembly via Anchor Point Diffusion. 158:1-158:11
- Kuan Tian, Zhihao Hu, Yonghang Guan, Jun Zhang: LLM-Primitives: Large Language Model for 3D Reconstruction with Primitives. 159:1-159:12
Text-to-Image & Customization
- Armando Fortes, Tianyi Wei, Shangchen Zhou, Xingang Pan: Bokeh Diffusion: Defocus Blur Control in Text-to-Image Diffusion Models. 160:1-160:11
- Rameen Abdal, Or Patashnik, Ekaterina Deyneka, Hao Chen, Aliaksandr Siarohin, Sergey Tulyakov, Daniel Cohen-Or, Kfir Aberman: Zero-Shot Dynamic Concept Personalization with Grid-Based LoRA. 161:1-161:10
Material & Reflectance Modeling
- Youxin Xing, Zheng Zeng, Youyang Du, Lu Wang, Beibei Wang: Diffusion-Guided Relighting for Single-Image SVBRDF Estimation. 162:1-162:10
- Li Wang, Jiajun Zhao, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang: EBREnv: SVBRDF Estimation in Uncontrolled Environment Lighting via Exemplar-Based Representation. 163:1-163:10
- Zhi Ying, Boxiang Rong, Jingyu Wang, Maoyuan Xu: Chord: Chain of Rendering Decomposition for PBR Material Estimation from Generated Texture Images. 164:1-164:11
- Jieting Xu, Ziyi Xu, Guoyuan An, Yiwei Hu, Rengan Xie, Zhijian Liu, Dianbing Xi, Wenjun Song, Rui Wang, Yuchi Huo: AniTex: Light-Geometry Consistent PBR Material Generation for Animatable Objects. 165:1-165:12
- Yunseong Moon, Ryota Maeda, Suhyun Shin, Inseung Hwang, Youngchan Kim, Min H. Kim, Seung-Hwan Baek: Hyperspectral Polarimetric BRDFs of Real-world Materials. 166:1-166:11
Advanced Fluid and Multiphase Simulation
- Ruolan Li, Yanrui Xu, Yalan Zhang, Jirí Kosinka, Alexandru C. Telea, Jian Chang, Jian Jun Zhang, Xiaojuan Ban, Xiaokun Wang: Multiphase Particle-Based Simulation of Poro-Elasto-Capillary Effects. 167:1-167:11
Shape Abstraction and Structural Analysis
- Sai Raj Kishore Perla, Aditya Vora, Sauradip Nag, Ali Mahdavi-Amiri, Hao Zhang: ASIA: Adaptive 3D Segmentation using Few Image Annotations. 168:1-168:12
- Yuhan Wang, Weikai Chen, Zeyu Hu, Runze Zhang, Yingda Yin, Ruoyu Wu, Keyang Luo, Shengju Qian, Yiyan Ma, Hongyi Li, Yuan Gao, Yuhuan Zhou, Hao Luo, Wan Wang, Xiaobin Shen, Zhaowei Li, Kuixin Zhu, Chuanlang Hong, Yueyue Wang, Lijie Feng, Xin Wang, Chen Change Loy: Light-SQ: Structure-aware Shape Abstraction with Superquadrics for Generated Meshes. 169:1-169:11
- Ningna Wang, Rui Xu, Yibo Yin, Zichun Zhong, Taku Komura, Wenping Wang, Xiaohu Guo: MATStruct: High-quality Medial Mesh Computation via Structure-aware Variational Optimization. 170:1-170:12
- Zeyu Ma, Adam Finkelstein, Jia Deng: Temporally Smooth Mesh Extraction for Procedural Scenes with Long-Range Camera Trajectories using Spacetime Octrees. 171:1-171:11
- Milin Kodnongbua, Zihan Zhang, Nicholas Sharp, Adriana Schulz: Design for Descent: What Makes a Shape Grammar Easy to Optimize? 172:1-172:11
Generative Synthesis, Editing & Customization
- Sam Sartor, Pieter Peers: Teamwork: Collaborative Diffusion with Low-rank Coordination and Adaptation. 173:1-173:11
- Yuancheng Xu, Wenqi Xian, Li Ma, Julien Philip, Ahmet Levent Tasel, Yiwei Zhao, Ryan D. Burgert, Mingming He, Oliver Hermann, Oliver Pilarski, Rahul Garg, Paul E. Debevec, Ning Yu: Virtually Being: Customizing Camera-Controllable Video Diffusion Models with Volumetric Performance Captures. 174:1-174:12
- Guiyu Zhang, Chen Shi, Zijian Jiang, Xunzhi Xiang, Jingjing Qian, Shaoshuai Shi, Li Jiang: Proteus-ID: ID-Consistent and Motion-Coherent Video Customization. 175:1-175:11
- Fulong Ye, Miao Hua, Pengze Zhang, Xinghui Li, Qichao Sun, Songtao Zhao, Qian He, Xinglong Wu: DreamID: High-Fidelity and Fast diffusion-based Face Swapping via Triplet ID Group Learning. 176:1-176:10
- Tobias Vontobel, Seyedmorteza Sadat, Farnood Salehi, Romann M. Weber: HiWave: Training-Free High-Resolution Image Generation via Wavelet-Based Diffusion Sampling. 177:1-177:11
Expressive and Structured Gaussian Representations
- Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar: Gradient-Weighted Feature Back-Projection: A Fast Alternative to Feature Distillation in 3D Gaussian Splatting. 178:1-178:12
- Jinhyeok Kim, Jaehun Bang, Seunghyun Seo, Kyungdon Joo: Rigidity-Aware 3D Gaussian Deformation from a Single Image. 179:1-179:11
- Yiming Wang, Shaofei Wang, Marko Mihajlovic, Siyu Tang: Neural Texture Splatting: Expressive 3D Gaussian Splatting for View Synthesis, Geometry, and Dynamic Reconstruction. 180:1-180:12
- Zimu Liao, Jifeng Ding, Siwei Cui, Ruixuan Gong, Boni Hu, Yi Wang, Hengjie Li, Hui Wang, Xingcheng Zhang, Rong Fu: TC-GS: A Faster Gaussian Splatting Module Utilizing Tensor Cores. 181:1-181:9
- Shuyi Zhou, Shengze Zhong, Kenshi Takayama, Takafumi Taketomi, Takeshi Oishi: DeMapGS: Simultaneous Mesh Deformation and Surface Attribute Mapping via Gaussian Splatting. 182:1-182:11
Human Motion Synthesis & Interaction
- Ziyu Zhang, Sergey Bashkirov, Dun Yang, Yi Shi, Michael Taylor, Xue Bin Peng: Physics-Based Motion Imitation with Adversarial Differential Discriminators. 183:1-183:12
- Quang Nguyen, Tri Le, Baoru Huang, Minh Nhat Vu, Ngan Le, Thieu Vo, Anh Nguyen: Learning Human Motion with Temporally Conditional Mamba. 184:1-184:10
- Hanyang Cao, Heyuan Yao, Libin Liu, Taesoo Kwon: SRBTrack: Terrain-Adaptive Tracking of a Single-Rigid-Body Character Using Momentum-Mapped Space-Time Optimization. 185:1-185:11
- Sheng Liu, Yuanzhi Liang, Jiepeng Wang, Sidan Du, Chi Zhang, Xuelong Li: Uni-Inter: Unifying 3D Human Motion Synthesis Across Diverse Interaction Contexts. 186:1-186:11
- Ziyao Huang, Zixiang Zhou, Juan Cao, Yifeng Ma, Yi Chen, Zejing Rao, Zhiyong Xu, Hongmei Wang, Qin Lin, Yuan Zhou, Qinglin Lu, Fan Tang: HOMA: Towards Generic Human-Object Interaction in Multimodal Driven Human Animation with Weak Conditions. 187:1-187:12
Geometry Processing & Representations
- Shibo Liu, Ligang Liu, Xiao-Ming Fu: Closed-form Cauchy Coordinates and Their Derivatives for 2D High-order Cages. 188:1-188:11
- Albert Garifullin, Nikolay Mayorov, Alexey Budak, Sergei Nikitin, Egor Prikhodko, Roman Rodionov, Ivan Korotaev, Vladimir Frolov: Compact shape representation utilizing local surface similarities. 189:1-189:11
Diffusion-Based Image Editing & Manipulation
- Yu Xu, Fan Tang, You Wu, Lin Gao, Oliver Deussen, Hongbin Yan, Jintao Li, Juan Cao, Tong-Yee Lee: In-Context Brush: Zero-shot Customized Subject Insertion with Context-Aware Latent Space Manipulation. 190:1-190:12
- Yaowei Li, Lingen Li, Zhaoyang Zhang, Xiaoyu Li, Guangzhi Wang, Hongxiang Li, Xiaodong Cun, Ying Shan, Yuexian Zou: BlobCtrl: Taming Controllable Blob for Element-level Image Editing. 191:1-191:12
- Zixin Yin, Ling-Hao Chen, Lionel M. Ni, Xili Dai: ConsistEdit: Highly Consistent and Precise Training-free Visual Editing. 192:1-192:11
- Seungyong Lee, Jeong-gi Kwak: Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off. 193:1-193:11
- Chong Mou, Yanze Wu, Wenxu Wu, Zinan Guo, Pengze Zhang, Yufeng Cheng, Yiming Luo, Fei Ding, Shiwen Zhang, Xinghui Li, Mengtian Li, Mingcong Liu, Yunsheng Jiang, Shaojin Wu, Songtao Zhao, Jian Zhang, Qian He, Xinglong Wu: DreamO: A Unified Framework for Image Customization. 194:1-194:12
- Bang Gong, Luchao Qi, Jiaye Wu, Zhicheng Fu, Chunbo Song, John Nicholson, Roni Sengupta: The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion. 195:1-195:12
Advanced Representations and Rendering for 3D Scenes
- Tooba Imtiaz, Lucy Chai, Kathryn Heal, Xuan Luo, Jungyeon Park, Jennifer G. Dy, John Flynn: LVT: Large-Scale Scene Reconstruction via Local View Transformers. 196:1-196:12
- Weihang Liu, Yuhui Zhong, Yuke Li, Xi Chen, Jiadi Cui, Honglong Zhang, Lan Xu, Xin Lou, Yujiao Shi, Jingyi Yu, Yingliang Zhang: CityGo: Lightweight Urban Modeling and Rendering with Proxy Buildings and Residual Gaussians. 197:1-197:10
- Yunfan Zeng, Li Ma, Pedro V. Sander: GSWT: Gaussian Splatting Wang Tiles. 198:1-198:11
- Letian Huang, Jie Guo, Jialin Dan, Ruoyu Fu, Yuanqi Li, Yanwen Guo: Spectral-GS: Taming 3D Gaussian Splatting with Spectral Entropy. 199:1-199:11
- Matthias Sebastian Treder, Pavlos Makridis, Alexis Lechat, Jesus Zarzar, Marina Villanueva Barreiro, Roc Ramon Currius: A compact stochastic representation for Monte Carlo Path Traced images. 200:1-200:11
- Yohan Poirier-Ginter, Jeffrey Hu, Jean-François Lalonde, George Drettakis: Editable Physically-based Reflections in Raytraced Gaussian Radiance Fields. 201:1-201:12
