




Masashi Sugiyama

2023
- [j176]Yosuke Otsubo, Naoya Otani, Megumi Chikasue, Mineyuki Nishino, Masashi Sugiyama:
Root cause estimation of faults in production processes: a novel approach inspired by approximate Bayesian computation. Int. J. Prod. Res. 61(5): 1556-1574 (2023) - [j175]Shota Nakajima, Masashi Sugiyama:
Positive-unlabeled classification under class-prior shift: a prior-invariant approach based on density ratio estimation. Mach. Learn. 112(3): 889-919 (2023) - [j174]Zhenguo Wu, Jiaqi Lv, Masashi Sugiyama:
Learning With Proper Partial Labels. Neural Comput. 35(1): 58-81 (2023) - [j173]Tingting Zhao, Ying Wang, Wei Sun, Yarui Chen, Gang Niu, Masashi Sugiyama:
Representation learning for continuous action spaces is beneficial for efficient policy learning. Neural Networks 159: 137-152 (2023) - [j172]Chen Gong, Yongliang Ding, Bo Han, Gang Niu, Jian Yang, Jane You, Dacheng Tao, Masashi Sugiyama:
Class-Wise Denoising for Robust Learning Under Label Noise. IEEE Trans. Pattern Anal. Mach. Intell. 45(3): 2835-2848 (2023) - [c261]Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama:
Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. ICLR 2023 - [c260]Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou:
Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning. ICLR 2023 - [c259]Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han:
Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation. ICML 2023: 8260-8275 - [c258]Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon:
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks. ICML 2023: 11255-11282 - [c257]Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama:
Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits. ICML 2023: 18810-18851 - [c256]Yivan Zhang, Masashi Sugiyama:
A Category-theoretical Meta-analysis of Definitions of Disentanglement. ICML 2023: 41596-41612 - [i209]Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama:
Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits. CoRR abs/2302.01544 (2023) - [i208]Yu-Jie Zhang, Zhen-Yu Zhang, Peng Zhao, Masashi Sugiyama:
Adapting to Continuous Covariate Shift via Online Density Ratio Estimation. CoRR abs/2302.02552 (2023) - [i207]Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon:
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks. CoRR abs/2302.02907 (2023) - [i206]Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli:
Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection. CoRR abs/2302.03857 (2023) - [i205]Jongyeong Lee, Chao-Kai Chiang, Masashi Sugiyama:
Asymptotically Optimal Thompson Sampling Based Policy for the Uniform Bandits and the Gaussian Bandits. CoRR abs/2302.14407 (2023) - [i204]Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu:
Fairness Improves Learning from Noisily Labeled Long-Tailed Data. CoRR abs/2303.12291 (2023) - [i203]Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli:
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization. CoRR abs/2305.00374 (2023) - [i202]Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama:
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks. CoRR abs/2305.00399 (2023) - [i201]Ming-Kun Xie, Jiahao Xiao, Hao-Zhe Liu, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang:
Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning. CoRR abs/2305.02795 (2023) - [i200]Yivan Zhang, Masashi Sugiyama:
A Category-theoretical Meta-analysis of Definitions of Disentanglement. CoRR abs/2305.06886 (2023) - [i199]Wei-I Lin, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama:
Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation. CoRR abs/2305.08344 (2023) - [i198]Sora Satake, Yoshihiro Nagano, Masashi Sugiyama, Masahiro Fujiwara, Yasutoshi Makino, Hiroyuki Shinoda:
Analysis of Pleasantness Evoked by Various Airborne Ultrasound Tactile Stimuli Using Pairwise Comparisons and the Bradley-Terry Model. CoRR abs/2305.09412 (2023) - [i197]Yivan Zhang, Masashi Sugiyama:
Enriching Disentanglement: Definitions to Metrics. CoRR abs/2305.11512 (2023) - [i196]Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj:
Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations. CoRR abs/2305.12715 (2023) - [i195]Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama:
Generalizing Importance Weighting to A Universal Solver for Distribution Shift Problems. CoRR abs/2305.14690 (2023) - [i194]Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama:
BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning. CoRR abs/2305.18377 (2023) - [i193]Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu:
Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision. CoRR abs/2306.07036 (2023) - [i192]Shintaro Nakamura, Masashi Sugiyama:
Combinatorial Pure Exploration of Multi-Armed Bandit with a Real Number Action Class. CoRR abs/2306.09202 (2023) - [i191]Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han:
Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation. CoRR abs/2307.05948 (2023) - [i190]Jialiang Tang, Shuo Chen, Gang Niu, Masashi Sugiyama, Chen Gong:
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images. CoRR abs/2307.11469 (2023) - [i189]Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang:
Multi-Label Knowledge Distillation. CoRR abs/2308.06453 (2023) - [i188]Shintaro Nakamura, Masashi Sugiyama:
Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit. CoRR abs/2308.10238 (2023) - [i187]Chao-Kai Chiang, Masashi Sugiyama:
Unified Risk Analysis for Weakly Supervised Learning. CoRR abs/2309.08216 (2023)
2022
- [j171]Akira Tanimoto, So Yamada, Takashi Takenouchi, Masashi Sugiyama, Hisashi Kashima:
Improving imbalanced classification using near-miss instances. Expert Syst. Appl. 201: 117130 (2022) - [j170]Hiroki Ishiguro, Takashi Ishida, Masashi Sugiyama:
Learning from Noisy Complementary Labels with Robust Loss Functions. IEICE Trans. Inf. Syst. 105-D(2): 364-376 (2022) - [j169]Yuangang Pan, Ivor W. Tsang, Weijie Chen, Gang Niu, Masashi Sugiyama:
Fast and Robust Rank Aggregation against Model Misspecification. J. Mach. Learn. Res. 23: 23:1-23:35 (2022) - [j168]Songhua Wu, Tongliang Liu, Bo Han, Jun Yu, Gang Niu, Masashi Sugiyama:
Learning from Noisy Pairwise Similarity and Unlabeled Data. J. Mach. Learn. Res. 23: 307:1-307:34 (2022) - [j167]Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama:
Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information. Neural Networks 152: 90-104 (2022) - [j166]Yutaka Matsuo, Yann LeCun, Maneesh Sahani, Doina Precup, David Silver, Masashi Sugiyama, Eiji Uchibe, Jun Morimoto:
Deep learning, reinforcement learning, and world models. Neural Networks 152: 267-275 (2022) - [j165]Kenji Doya, Karl J. Friston, Masashi Sugiyama, Joshua B. Tenenbaum:
Neural Networks special issue on Artificial Intelligence and Brain Science. Neural Networks 155: 328-329 (2022) - [j164]Chen Gong, Jian Yang, Jane You, Masashi Sugiyama:
Centroid Estimation With Guaranteed Efficiency: A General Framework for Weakly Supervised Learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(6): 2841-2855 (2022) - [j163]Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama:
LocalDrop: A Hybrid Regularization for Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(7): 3590-3601 (2022) - [j162]Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Lizhen Cui, Gang Niu, Masashi Sugiyama:
NoiLin: Improving adversarial training and correcting stereotype of noisy labels. Trans. Mach. Learn. Res. 2022 (2022) - [c255]Shintaro Nakamura, Han Bao, Masashi Sugiyama:
Robust computation of optimal transport by β-potential regularization. ACML 2022: 770-785 - [c254]Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama:
Multi-class Classification from Multiple Unlabeled Datasets with Partial Risk Regularization. ACML 2022: 990-1005 - [c253]Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama:
Pairwise Supervision Can Provably Elicit a Decision Boundary. AISTATS 2022: 2618-2640 - [c252]Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama:
Predictive variational Bayesian inference as risk-seeking optimization. AISTATS 2022: 5051-5083 - [c251]Masashi Sugiyama, Tongliang Liu, Bo Han, Yang Liu, Gang Niu:
Learning and Mining with Noisy Labels. CIKM 2022: 5152-5155 - [c250]De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama:
Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation. CVPR 2022: 16609-16618 - [c249]Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama:
Meta Discovery: Learning to Discover Novel Classes given Very Limited Data. ICLR 2022 - [c248]Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama:
Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients. ICLR 2022 - [c247]Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama:
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. ICLR 2022 - [c246]Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, Dacheng Tao:
Rethinking Class-Prior Estimation for Positive-Unlabeled Learning. ICLR 2022 - [c245]Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama:
Exploiting Class Activation Value for Partial-Label Learning. ICLR 2022 - [c244]Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, Yang Liu:
To Smooth or Not? When Label Smoothing Meets Noisy Labels. ICML 2022: 23589-23614 - [c243]Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama:
Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum. ICML 2022: 24430-24459 - [c242]Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli:
Adversarial Attack and Defense for Non-Parametric Two-Sample Tests. ICML 2022: 24743-24769 - [c241]Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan:
Towards Adversarially Robust Deep Image Denoising. IJCAI 2022: 1516-1522 - [c240]Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama:
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. NeurIPS 2022 - [c239]Shuo Chen, Chen Gong, Jun Li, Jian Yang, Gang Niu, Masashi Sugiyama:
Learning Contrastive Embedding in Low-Dimensional Space. NeurIPS 2022 - [c238]Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou:
Adapting to Online Label Shift with Provable Guarantees. NeurIPS 2022 - [c237]Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama:
Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses. NeurIPS 2022 - [c236]Sen Cui, Jingfeng Zhang, Jian Liang, Bo Han, Masashi Sugiyama, Changshui Zhang:
Synergy-of-Experts: Collaborate to Improve Adversarial Robustness. NeurIPS 2022 - [i186]Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan:
Towards Adversarially Robust Deep Image Denoising. CoRR abs/2201.04397 (2022) - [i185]Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama:
Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. CoRR abs/2202.00395 (2022) - [i184]Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S. Kankanhalli:
Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests. CoRR abs/2202.03077 (2022) - [i183]Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama:
On the Effectiveness of Adversarial Training against Backdoor Attacks. CoRR abs/2202.10627 (2022) - [i182]Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama:
Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients. CoRR abs/2204.03304 (2022) - [i181]Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama:
Universal approximation property of invertible neural networks. CoRR abs/2204.07415 (2022) - [i180]Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama:
Excess risk analysis for epistemic uncertainty with application to variational inference. CoRR abs/2206.01606 (2022) - [i179]De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama:
Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation. CoRR abs/2206.02791 (2022) - [i178]Charles Riou, Junya Honda, Masashi Sugiyama:
The Survival Bandit Problem. CoRR abs/2206.03019 (2022) - [i177]Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama:
Learning from Multiple Unlabeled Datasets with Partial Risk Regularization. CoRR abs/2207.01555 (2022) - [i176]Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou:
Adapting to Online Label Shift with Provable Guarantees. CoRR abs/2207.02121 (2022) - [i175]Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama:
Equivariant Disentangled Transformation for Domain Generalization under Combination Shift. CoRR abs/2208.02011 (2022) - [i174]Nobutaka Ito, Masashi Sugiyama:
Audio Signal Enhancement with Learning from Positive and Unlabelled Data. CoRR abs/2210.15143 (2022) - [i173]Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama:
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks. CoRR abs/2211.00269 (2022) - [i172]Tingting Zhao, Ying Wang, Wei Sun, Yarui Chen, Gang Niu, Masashi Sugiyama:
Representation Learning for Continuous Action Spaces is Beneficial for Efficient Policy Learning. CoRR abs/2211.13257 (2022) - [i171]Shintaro Nakamura, Han Bao, Masashi Sugiyama:
Robust computation of optimal transport by β-potential regularization. CoRR abs/2212.13251 (2022)
2021
- [j161]Motoya Ohnishi, Gennaro Notomista, Masashi Sugiyama, Magnus Egerstedt:
Constraint learning for control tasks with limited duration barrier functions. Autom. 127: 109504 (2021) - [j160]Tomoya Sakai, Gang Niu, Masashi Sugiyama:
Information-Theoretic Representation Learning for Positive-Unlabeled Classification. Neural Comput. 33(1): 244-268 (2021) - [j159]Takuya Shimada, Han Bao, Issei Sato, Masashi Sugiyama:
Classification From Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Neural Comput. 33(5): 1234-1268 (2021) - [j158]Wenkai Xu, Gang Niu, Aapo Hyvärinen, Masashi Sugiyama:
Direction Matters: On Influence-Preserving Graph Summarization and Max-Cut Principle for Directed Graphs. Neural Comput. 33(8): 2128-2162 (2021) - [j157]Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, Dacheng Tao, Masashi Sugiyama:
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting. Neural Comput. 33(8): 2163-2192 (2021) - [j156]Taira Tsuchiya, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama:
Semisupervised Ordinal Regression Based on Empirical Risk Minimization. Neural Comput. 33(12): 3361-3412 (2021) - [j155]Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama:
A One-Step Approach to Covariate Shift Adaptation. SN Comput. Sci. 2(4): 319 (2021) - [c235]Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama:
Robust Imitation Learning from Noisy Demonstrations. AISTATS 2021: 298-306 - [c234]Han Bao, Masashi Sugiyama:
Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation. AISTATS 2021: 1648-1656 - [c233]Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama:
γ-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator. AISTATS 2021: 1783-1791 - [c232]Paavo Parmas, Masashi Sugiyama:
A unified view of likelihood ratio and reparameterization gradients. AISTATS 2021: 4078-4086 - [c231]Masashi Sugiyama:
Mixture Proportion Estimation in Weakly Supervised Learning. CIKM Workshops 2021 - [c230]Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama:
On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective. CVPR 2021: 5202-5211 - [c229]Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama:
Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning. EACL 2021: 581-592 - [c228]Zeke Xie, Issei Sato, Masashi Sugiyama:
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima. ICLR 2021 - [c227]Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan S. Kankanhalli:
Geometry-aware Instance-reweighted Adversarial Training. ICLR 2021 - [c226]Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama:
Confidence Scores Make Instance-dependent Label-noise Learning Possible. ICML 2021: 825-836 - [c225]Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama:
Learning from Similarity-Confidence Data. ICML 2021: 1272-1282 - [c224]Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama:
Classification with Rejection Based on Cost-sensitive Classification. ICML 2021: 1507-1517 - [c223]Shuo Chen, Gang Niu, Chen Gong, Jun Li, Jian Yang, Masashi Sugiyama:
Large-Margin Contrastive Learning with Distance Polarization Regularizer. ICML 2021: 1673-1683 - [c222]Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama:
Learning Diverse-Structured Networks for Adversarial Robustness. ICML 2021: 2880-2891 - [c221]Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama:
Pointwise Binary Classification with Pairwise Confidence Comparisons. ICML 2021: 3252-3262 - [c220]Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama:
Maximum Mean Discrepancy Test is Aware of Adversarial Attacks. ICML 2021: 3564-3575 - [c219]Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama:
Provably End-to-end Label-noise Learning without Anchor Points. ICML 2021: 6403-6413 - [c218]Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama:
Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification. ICML 2021: 7134-7144 - [c217]Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama:
Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization. ICML 2021: 11448-11458 - [c216]Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama:
Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences. ICML 2021: 11637-11647 - [c215]Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama:
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection. ICML 2021: 11693-11703 - [c214]Shuhei M. Yoshida, Takashi Takenouchi, Masashi Sugiyama:
Lower-Bounded Proper Losses for Weakly Supervised Classification. ICML 2021: 12110-12120 - [c213]Yivan Zhang, Gang Niu, Masashi Sugiyama:
Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization. ICML 2021: 12501-12512 - [c212]Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama:
Loss function based second-order Jensen inequality and its application to particle variational inference. NeurIPS 2021: 6803-6815 - [c211]Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama:
Probabilistic Margins for Instance Reweighting in Adversarial Training. NeurIPS 2021: 23258-23269 - [c210]Soham Dan, Han Bao, Masashi Sugiyama:
Learning from Noisy Similar and Dissimilar Data. ECML/PKDD (2) 2021: 233-249 - [c209]Takeshi Teshima, Masashi Sugiyama:
Incorporating causal graphical prior knowledge into predictive modeling via simple data augmentation. UAI 2021: 86-96 - [i170]Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama:
A Symmetric Loss Perspective of Reliable Machine Learning. CoRR abs/2101.01366 (2021) - [i169]Masato Ishii, Masashi Sugiyama:
Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics. CoRR abs/2101.10842 (2021) - [i168]Shida Lei, Nan Lu, Gang Niu, Issei Sato, Masashi Sugiyama:
Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification. CoRR abs/2102.00678 (2021) - [i167]Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama:
Learning Diverse-Structured Networks for Adversarial Robustness. CoRR abs/2102.01886 (2021) - [i166]Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama:
Provably End-to-end Label-Noise Learning without Anchor Points. CoRR abs/2102.02400 (2021) - [i165]Yivan Zhang, Gang Niu, Masashi Sugiyama:
Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization. CoRR abs/2102.02414 (2021) - [i164]Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan S. Kankanhalli, Masashi Sugiyama:
Understanding the Interaction of Adversarial Training with Noisy Labels. CoRR abs/2102.03482 (2021) - [i163]