


Tianlong Chen
Publications, 2020 – today
2023

- [j16] Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen: Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. Int. J. Comput. Vis. 131(10): 2635-2648 (2023)
- [j15] Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma: Troubleshooting image segmentation models with human-in-the-loop. Mach. Learn. 112(3): 1033-1051 (2023)
- [j14] Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang: Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study. IEEE Trans. Pattern Anal. Mach. Intell. 45(3): 2769-2781 (2023)
- [j13] Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang: Can Pruning Improve Certified Robustness of Neural Networks? Trans. Mach. Learn. Res. 2023 (2023)
- [c103] Zhenglun Kong, Haoyu Ma, Geng Yuan, Mengshu Sun, Yanyue Xie, Peiyan Dong, Xin Meng, Xuan Shen, Hao Tang, Minghai Qin, Tianlong Chen, Xiaolong Ma, Xiaohui Xie, Zhangyang Wang, Yanzhi Wang: Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training. AAAI 2023: 8360-8368
- [c102] Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng: DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models. ACL (1) 2023: 8208-8222
- [c101] Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, Dacheng Tao, Yingbin Liang, Zhangyang Wang: Learning to Generalize Provably in Learning to Optimize. AISTATS 2023: 9807-9825
- [c100] Zhangheng Li, Yu Gong, Zhenyu Zhang, Xingyun Xue, Tianlong Chen, Yi Liang, Bo Yuan, Zhangyang Wang: Accelerable Lottery Tickets with the Mixed-Precision Quantization. CVPR Workshops 2023: 4604-4612
- [c99] Tianlong Chen, Chengyue Gong, Daniel Jesus Diaz, Xuxi Chen, Jordan Tyler Wells, Qiang Liu, Zhangyang Wang, Andrew D. Ellington, Alex Dimakis, Adam R. Klivans: HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing. ICLR 2023
- [c98] Tianlong Chen, Zhenyu Zhang, Ajay Kumar Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. ICLR 2023
- [c97] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang: More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. ICLR 2023
- [c96] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! ICLR 2023
- [c95] Mukund Varma T, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang: Is Attention All That NeRF Needs? ICLR 2023
- [c94] Junjie Yang, Xuxi Chen, Tianlong Chen, Zhangyang Wang, Yingbin Liang: M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation. ICLR 2023
- [c93] Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen: Graph Domain Adaptation via Theory-Grounded Spectral Regularization. ICLR 2023
- [c92] Xuxi Chen, Nelson Vadori, Tianlong Chen, Zhangyang Wang: Learning to Optimize Differentiable Games. ICML 2023: 5036-5051
- [c91] Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication. ICML 2023: 14679-14690
- [c90] Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. ICML 2023: 14691-14701
- [c89] Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy: Enhancing Adversarial Training via Reweighting Optimization Trajectory. ECML/PKDD (1) 2023: 113-130
- [c88] Ajay Jaiswal, Tianlong Chen, Justin F. Rousseau, Yifan Peng, Ying Ding, Zhangyang Wang: Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances. WACV 2023: 4976-4985
- [i92] Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, Dacheng Tao, Yingbin Liang, Zhangyang Wang: Learning to Generalize Provably in Learning to Optimize. CoRR abs/2302.11085 (2023)
- [i91] Junjie Yang, Xuxi Chen, Tianlong Chen, Zhangyang Wang, Yingbin Liang: M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation. CoRR abs/2303.00039 (2023)
- [i90] Tianlong Chen, Zhenyu Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang: Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. CoRR abs/2303.01610 (2023)
- [i89] Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang: Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! CoRR abs/2303.02141 (2023)
- [i88] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang: The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter. CoRR abs/2306.03805 (2023)
- [i87] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. CoRR abs/2306.10460 (2023)
- [i86] Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication. CoRR abs/2306.10466 (2023)
- [i85] Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. CoRR abs/2306.14048 (2023)
- [i84] Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy: Enhancing Adversarial Training via Reweighting Optimization Trajectory. CoRR abs/2306.14275 (2023)
- [i83] Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu: Robust Mixture-of-Expert Training for Convolutional Neural Networks. CoRR abs/2308.10110 (2023)
- [i82] Wenyan Cong, Hanxue Liang, Peihao Wang, Zhiwen Fan, Tianlong Chen, Mukund Varma T, Yi Wang, Zhangyang Wang: Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts. CoRR abs/2308.11793 (2023)

2022
- [j12] Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin: Learning to Optimize: A Primer and A Benchmark. J. Mach. Learn. Res. 23: 189:1-189:59 (2022)
- [j11] Tianlong Chen, Yu Cheng, Zhe Gan, Jianfeng Wang, Lijuan Wang, Jingjing Liu, Zhangyang Wang: Adversarial Feature Augmentation and Normalization for Visual Recognition. Trans. Mach. Learn. Res. 2022 (2022)
- [j10] Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang: Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning. Trans. Mach. Learn. Res. 2022 (2022)
- [j9] Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, Zhangyang Wang: Can You Win Everything with A Lottery Ticket? Trans. Mach. Learn. Res. 2022 (2022)
- [j8] Chaojian Li, Wuyang Chen, Yuchen Gu, Tianlong Chen, Yonggan Fu, Zhangyang Wang, Yingyan Lin: DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference. ACM Trans. Design Autom. Electr. Syst. 27(5): 50:1-50:20 (2022)
- [j7] Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Wenqing Zheng, Zhangyang Wang, Alejandro Ribeiro, Brian M. Sadler: Scalable Perception-Action-Communication Loops With Convolutional and Graph Neural Networks. IEEE Trans. Signal Inf. Process. over Networks 8: 12-24 (2022)
- [c87] Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu: Playing Lottery Tickets with Vision and Language. AAAI 2022: 652-660
- [c86] Duc N. M. Hoang, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang: AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks. AutoML 2022: 4/1-16
- [c85] Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. CVPR 2022: 588-599
- [c84] Zhiwen Fan, Tianlong Chen, Peihao Wang, Zhangyang Wang: CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings. CVPR 2022: 10976-10986
- [c83] Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Hassan Awadallah, Zhangyang Wang: The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy. CVPR 2022: 12010-12020
- [c82] Tianlong Chen, Peihao Wang, Zhiwen Fan, Zhangyang Wang: Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations. CVPR 2022: 15170-15181
- [c81] Hanxue Liang, Hehe Fan, Zhiwen Fan, Yi Wang, Tianlong Chen, Yu Cheng, Zhangyang Wang: Point Cloud Domain Adaptation via Masked Local 3D Structure Prediction. ECCV (3) 2022: 156-172
- [c80] Ziyu Jiang, Tianlong Chen, Xuxi Chen, Yu Cheng, Luowei Zhou, Lu Yuan, Ahmed Hassan Awadallah, Zhangyang Wang: DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment. ECCV (20) 2022: 239-256
- [c79] Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang: Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models. ECCV (23) 2022: 389-405
- [c78] Mu Yang, Shaojin Ding, Tianlong Chen, Tong Wang, Zhangyang Wang: Towards Lifelong Learning of Multilingual Text-to-Speech Synthesis. ICASSP 2022: 8022-8026
- [c77] Tianlong Chen, Zhenyu Zhang, Pengjun Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang: Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. ICLR 2022
- [c76] Shaojin Ding, Tianlong Chen, Zhangyang Wang: Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable. ICLR 2022
- [c75] Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang: Optimizer Amalgamation. ICLR 2022
- [c74] Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu: Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity. ICLR 2022
- [c73] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. ICLR 2022
- [c72] Lu Miao, Xiaolong Luo, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang: Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining. ICLR 2022
- [c71] Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang: Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. ICLR 2022
- [c70] Yuning You, Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen: Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How. ICLR 2022
- [c69] Shixing Yu, Tianlong Chen, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang: Unified Visual Transformer Compression. ICLR 2022
- [c68] Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, Zhangyang Wang: Symbolic Learning to Optimize: Towards Interpretability and Scalability. ICLR 2022
- [c67] Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets. ICML 2022: 3025-3039
- [c66] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. ICML 2022: 3747-3759
- [c65] Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. ICML 2022: 3760-3772
- [c64] Ajay Kumar Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang: Training Your Sparse Neural Network Better with Any Mask. ICML 2022: 9833-9844
- [c63] William T. Redman, Tianlong Chen, Zhangyang Wang, Akshunna S. Dogra: Universality of Winning Tickets: A Renormalization Group Perspective. ICML 2022: 18483-18498
- [c62] Peihao Wang, Zhiwen Fan, Tianlong Chen, Zhangyang Wang: Neural Implicit Dictionary Learning via Mixture-of-Expert Training. ICML 2022: 22613-22624
- [c61] Yongduo Sui, Tianlong Chen, Pengfei Xia, Shuyao Wang, Bin Li: Towards Robust Detection and Segmentation Using Vertical and Horizontal Adversarial Training. IJCNN 2022: 1-8
- [c60] Tianlong Chen, Xuemei Cheng, Thomas Tsao: Border Ownership, Category Selectivity and Beyond. ISVC (2) 2022: 27-38
- [c59] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu: You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. LoG 2022: 8
- [c58] Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, Zhangyang Wang: Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets. NeurIPS 2022
- [c57] Keyu Duan, Zirui Liu, Peihao Wang, Wenqing Zheng, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang: A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking. NeurIPS 2022
- [c56] Ajay Jaiswal, Peihao Wang, Tianlong Chen, Justin F. Rousseau, Ying Ding, Zhangyang Wang: Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again. NeurIPS 2022
- [c55] Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang: M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design. NeurIPS 2022
- [c54] Mukund Varma T, Xuxi Chen, Zhenyu Zhang, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang: Sparse Winning Tickets are Data-Efficient Image Recognizers. NeurIPS 2022
- [c53] Tianxin Wei, Yuning You, Tianlong Chen, Yang Shen, Jingrui He, Zhangyang Wang: Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative. NeurIPS 2022
- [c52] Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu: Advancing Model Pruning via Bi-level Optimization. NeurIPS 2022
- [c51] Xinyu Gong, Wuyang Chen, Tianlong Chen, Zhangyang Wang: Sandwich Batch Normalization: A Drop-In Replacement for Feature Distribution Heterogeneity. WACV 2022: 2957-2967
- [c50] Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen: Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations. WSDM 2022: 1300-1309
- [i81] Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen: Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations. CoRR abs/2201.01702 (2022)
- [i80] Mengshu Sun, Haoyu Ma, Guoliang Kang, Yifan Jiang, Tianlong Chen, Xiaolong Ma, Zhangyang Wang, Yanzhi Wang: VAQF: Fully Automatic Software-hardware Co-design Framework for Low-bit Vision Transformer. CoRR abs/2201.06618 (2022)
- [i79] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. CoRR abs/2202.02643 (2022)
- [i78] Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets. CoRR abs/2202.04736 (2022)
- [i77] Tianlong Chen, Zhenyu Zhang, Pengjun Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang: Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. CoRR abs/2202.09844 (2022)
- [i76] Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen: Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. CoRR abs/2203.02770 (2022)
- [i75] Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang: Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. CoRR abs/2203.05962 (2022)
- [i74] Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Hassan Awadallah, Zhangyang Wang: The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy. CoRR abs/2203.06345 (2022)
- [i73] Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang: Optimizer Amalgamation. CoRR abs/2203.06474 (2022)
- [i72] Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, Zhangyang Wang: Symbolic Learning to Optimize: Towards Interpretability and Scalability. CoRR abs/2203.06578 (2022)
- [i71] Shixing Yu, Tianlong Chen, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang: Unified Visual Transformer Compression. CoRR abs/2203.08243 (2022)
- [i70] Diganta Misra, Bharat Runwal, Tianlong Chen, Zhangyang Wang, Irina Rish: APP: Anytime Progressive Pruning. CoRR abs/2204.01640 (2022)
- [i69] Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang: Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. CoRR abs/2205.11819 (2022)
- [i68] Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training. CoRR abs/2206.04762 (2022)
- [i67] Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang: Can pruning improve certified robustness of neural networks? CoRR abs/2206.07311 (2022)
- [i66] Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. CoRR abs/2206.07839 (2022)
- [i65] Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang: Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning. CoRR abs/2206.07842 (2022)
- [i64] Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang: Training Your Sparse Neural Network Better with Any Mask. CoRR abs/2206.12755 (2022)
- [i63] Tianlong Chen, Peihao Wang, Zhiwen Fan, Zhangyang Wang: Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations. CoRR abs/2207.01164 (2022)
- [i62] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang: More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. CoRR abs/2207.03620 (2022)
- [i61] Peihao Wang, Zhiwen Fan, Tianlong Chen, Zhangyang Wang: Neural Implicit Dictionary via Mixture-of-Expert Training. CoRR abs/2207.03691 (2022)
- [i60] Mukund Varma T, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang: Is Attention All NeRF Needs? CoRR abs/2207.13298 (2022)
- [i59] Yi Wang, Zhiwen Fan, Tianlong Chen, Hehe Fan, Zhangyang Wang: Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer? CoRR abs/2209.07026 (2022)
- [i58] Tianxin Wei, Yuning You, Tianlong Chen, Yang Shen, Jingrui He, Zhangyang Wang: Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative. CoRR abs/2210.03801 (2022)
- [i57] Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu: Advancing Model Pruning via Bi-level Optimization. CoRR abs/2210.04092 (2022)
- [i56] Keyu Duan, Zirui Liu, Peihao Wang, Wenqing Zheng, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang: A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking. CoRR abs/2210.07494 (2022)
- [i55] Ajay Jaiswal, Peihao Wang, Tianlong Chen, Justin F. Rousseau, Ying Ding, Zhangyang Wang: Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again. CoRR abs/2210.08122 (2022)
- [i54] Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang: M3ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design. CoRR abs/2210.14793 (2022)
- [i53] Kaixiong Zhou, Zhenyu Zhang, Shengyuan Chen, Tianlong Chen, Xiao Huang, Zhangyang Wang, Xia Hu: QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks. CoRR abs/2211.07379 (2022)
- [i52] Zhenglun Kong, Haoyu Ma, Geng Yuan, Mengshu Sun, Yanyue Xie, Peiyan Dong, Xin Meng, Xuan Shen, Hao Tang, Minghai Qin, Tianlong Chen, Xiaolong Ma, Xiaohui Xie, Zhangyang Wang, Yanzhi Wang: Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training. CoRR abs/2211.10801 (2022)
- [i51] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu: You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. CoRR abs/2211.15335 (2022)
- [i50] Ajay Jaiswal, Tianlong Chen, Justin F. Rousseau, Yifan Peng, Ying Ding, Zhangyang Wang: Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances. CoRR abs/2212.02675 (2022)

2021
- [j6]Wei Shang
, Guohao Jing
, Daode Zhang, Tianlong Chen
, Qihang Liang
:
Adaptive Fixed Time Nonsingular Terminal Sliding-Mode Control for Quadrotor Formation With Obstacle and Inter-Quadrotor Avoidance. IEEE Access 9: 60640-60657 (2021) - [j5]Cheng Zhang, Tianlong Chen
, Wei Shang
, Zhongzhong Zheng
, Huizheng Yuan:
Adaptive Super-Twisting Distributed Formation Control of Multi-Quadrotor Under External Disturbance. IEEE Access 9: 148104-148117 (2021) - [j4]Jianneng Chen, Xianbing Bian, Liqun Chen, Tianlong Chen, Zhiwei Chen, Chennan Yu:
Design and testing of a production line mechanism for continuous cutting and coring of broccoli. Comput. Electron. Agric. 191: 106505 (2021) - [c49]Lida Zhang, Xiaohan Chen, Tianlong Chen, Zhangyang Wang, Bobak J. Mortazavi:
DynEHR: Dynamic adaptation of models with data heterogeneity in electronic health records. BHI 2021: 1-4 - [c48]Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang:
"BNN - BN = ?": Training Binary Neural Networks Without Batch Normalization. CVPR Workshops 2021: 4619-4629 - [c47]Zhihua Wang, Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma
:
Troubleshooting Blind Image Quality Models in the Wild. CVPR 2021: 16256-16265 - [c46]Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang:
The Lottery Tickets Hypothesis for Supervised and Self-Supervised Pre-Training in Computer Vision Models. CVPR 2021: 16306-16316 - [c45]Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Zhangyang Wang, Alejandro Ribeiro
, Brian M. Sadler:
VGAI: End-to-End Learning of Vision-Based Decentralized Controllers for Robot Swarms. ICASSP 2021: 4900-4904 - [c44]Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang:
Robust Overfitting may be mitigated by properly learned smoothening. ICLR 2021 - [c43]Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang:
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning. ICLR 2021 - [c42]