Se-Young Yun
Se-young Yun – Seyoung Yun – SeYoung Yun
Person information
- affiliation: Korea Advanced Institute of Science and Technology (KAIST), Graduate School of AI, Daejeon, South Korea
- affiliation: Los Alamos National Laboratory, NM, USA
- affiliation: Microsoft Research, Cambridge, UK
- affiliation: Microsoft Research-INRIA Joint Centre, Paris, France
- affiliation: KTH Royal Institute of Technology, Stockholm, Sweden
- 2024
- [j14] Kaito Ariu, Jungseul Ok, Alexandre Proutière, Seyoung Yun: Optimal clustering from noisy binary feedback. Mach. Learn. 113(5): 2733-2764 (2024)
- [c77] Yongjin Yang, Taehyeon Kim, Se-Young Yun: Leveraging Normalization Layer in Adapters with Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning. AAAI 2024: 16370-16378
- [c76] Junghyun Lee, Se-Young Yun, Kwang-Sung Jun: Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion. AISTATS 2024: 4474-4482
- [c75] Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun: FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning. CVPR 2024: 12512-12522
- [c74] Taehyeon Kim, Joonkee Kim, Gihun Lee, Se-Young Yun: Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions. ICLR 2024
- [c73] Mingyu Kim, Jun-Seong Kim, Se-Young Yun, Jin-Hwa Kim: Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs. ICML 2024
- [c72] Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun: DistiLLM: Towards Streamlined Distillation for Large Language Models. ICML 2024
- [c71] Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sangmin Bae, Namgyu Ho, Sung Ju Hwang, Se-Young Yun: Carpe diem: On the Evaluation of World Knowledge in Lifelong Language Models. NAACL-HLT 2024: 5401-5415
- [i96] Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun: DistiLLM: Towards Streamlined Distillation for Large Language Models. CoRR abs/2402.03898 (2024)
- [i95] Taehyeon Kim, Donggyu Kim, Se-Young Yun: Revisiting Early-Learning Regularization When Federated Learning Meets Noisy Labels. CoRR abs/2402.05353 (2024)
- [i94] Marc Bartholet, Taehyeon Kim, Ami Beuret, Se-Young Yun, Joachim M. Buhmann: Non-linear Fusion in Federated Learning: A Hypernetwork Approach to Federated Domain Generalization. CoRR abs/2402.06974 (2024)
- [i93] Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim: Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning. CoRR abs/2402.08594 (2024)
- [i92] Jung-Hun Kim, Milan Vojnovic, Se-Young Yun: Rotting Infinitely Many-armed Bandits beyond the Worst-case Rotting: An Adaptive Approach. CoRR abs/2404.14202 (2024)
- [i91] Yongjin Yang, Sihyeon Kim, SangMook Kim, Gyubok Lee, Se-Young Yun, Edward Choi: Towards Unbiased Evaluation of Detecting Unanswerable Questions in EHRSQL. CoRR abs/2405.01588 (2024)
- [i90] Mingyu Kim, Jun-Seong Kim, Se-Young Yun, Jin-Hwa Kim: Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs. CoRR abs/2405.07857 (2024)
- [i89] Felix den Breejen, Sangmin Bae, Stephen Cha, Se-Young Yun: Why In-Context Learning Transformers are Tabular Data Classifiers. CoRR abs/2405.13396 (2024)
- [i88] Minu Kim, Yongsik Lee, Sehyeok Kang, Jihwan Oh, Song Chong, Seyoung Yun: Preference Alignment with Flow Matching. CoRR abs/2405.19806 (2024)
- [i87] Seongyoon Kim, Minchan Jeong, Sungnyun Kim, Sungwoo Cho, Sumyeong Ahn, Se-Young Yun: FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning. CoRR abs/2406.02355 (2024)
- [i86] Namgyu Ho, Sangmin Bae, Taehyeon Kim, Hyunjik Jo, Yireun Kim, Tal Schuster, Adam Fisch, James Thorne, Se-Young Yun: Block Transformer: Global-to-Local Language Modeling for Fast Inference. CoRR abs/2406.02657 (2024)
- [i85] Euiin Yi, Taehyeon Kim, Hongseok Jeung, Du-Seong Chang, Se-Young Yun: Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters. CoRR abs/2406.16758 (2024)
- [i84] Gihun Lee, Minchan Jeong, Yujin Kim, Hojung Jung, Jaehoon Oh, Sangmook Kim, Se-Young Yun: BAPO: Base-Anchored Preference Optimization for Personalized Alignment in Large Language Models. CoRR abs/2407.00693 (2024)
- [i83] Sungnyun Kim, Kangwook Jang, Sangmin Bae, Hoirin Kim, Se-Young Yun: Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition. CoRR abs/2407.03563 (2024)
- [i82] Junghyun Lee, Se-Young Yun, Kwang-Sung Jun: A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits. CoRR abs/2407.13977 (2024)
- [i81] Sihyeon Kim, Boryeong Cho, Sangmin Bae, Sumyeong Ahn, Se-Young Yun: VACoDe: Visual Augmented Contrastive Decoding. CoRR abs/2408.05337 (2024)
- [i80] Jihwan Oh, Sungnyun Kim, Gahee Kim, Sunghwan Kim, Se-Young Yun: Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning. CoRR abs/2408.13092 (2024)
- [i79] Woojin Chung, Jiwoo Hong, Na Min An, James Thorne, Se-Young Yun: Stable Language Model Pre-training by Reducing Embedding Variability. CoRR abs/2409.07787 (2024)
- 2023
- [j13] Kyeongryeol Go, Mingyu Kim, Seyoung Yun: Meta-Learning Amidst Heterogeneity and Ambiguity. IEEE Access 11: 1578-1592 (2023)
- [j12] Mingyu Kim, Jihwan Oh, Yongsik Lee, Joonkee Kim, Seonghwan Kim, Song Chong, Seyoung Yun: The StarCraft Multi-Agent Exploration Challenges: Learning Multi-Stage Tasks and Environmental Factors Without Precise Reward Functions. IEEE Access 11: 37854-37868 (2023)
- [j11] Dabeen Lee, Milan Vojnovic, Se-Young Yun: Test Score Algorithms for Budgeted Stochastic Utility Maximization. INFORMS J. Optim. 5(1): 27-67 (2023)
- [j10] Milan Vojnovic, Se-Young Yun, Kaifang Zhou: Accelerated MM Algorithms for Inference of Ranking Scores from Comparison Data. Oper. Res. 71(4): 1318-1342 (2023)
- [c70] Sumyeong Ahn, Se-Young Yun: Denoising after Entropy-Based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels. AAAI 2023: 169-177
- [c69] Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun: Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network. AAAI 2023: 197-205
- [c68] Jongwoo Ko, Bongsoo Yi, Se-Young Yun: A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. AAAI 2023: 8325-8333
- [c67] Namgyu Ho, Laura Schmid, Se-Young Yun: Large Language Models Are Reasoning Teachers. ACL (1) 2023: 14852-14882
- [c66] Jung-Hun Kim, Se-Young Yun, Minchan Jeong, Junhyun Nam, Jinwoo Shin, Richard Combes: Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles. AISTATS 2023: 1624-1645
- [c65] Yassir Jedra, Junghyun Lee, Alexandre Proutière, Se-Young Yun: Nearly Optimal Latent State Decoding in Block MDPs. AISTATS 2023: 2805-2904
- [c64] Jihwan Oh, Joonkee Kim, Minchan Jeong, Se-Young Yun: Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning. AAMAS 2023: 1597-1605
- [c63] Sangmook Kim, Sangmin Bae, Hwanjun Song, Se-Young Yun: Re-Thinking Federated Active Learning Based on Inter-Class Diversity. CVPR 2023: 3944-3953
- [c62] Sungnyun Kim, Sangmin Bae, Se-Young Yun: Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning. CVPR 2023: 7537-7547
- [c61] Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun: Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. EACL (Findings) 2023: 158-175
- [c60] Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim: Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning. EMNLP (Findings) 2023: 4942-4958
- [c59] Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, Se-Young Yun: HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning. EMNLP (Findings) 2023: 5490-5505
- [c58] Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun: Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding. EMNLP 2023: 5910-5924
- [c57] Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun: NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models. EMNLP (Findings) 2023: 6076-6093
- [c56] Sumyeong Ahn, Jongwoo Ko, Se-Young Yun: CUDA: Curriculum of Data Augmentation for Long-tailed Recognition. ICLR 2023
- [c55] Sumyeong Ahn, Seongyoon Kim, Se-Young Yun: Mitigating Dataset Bias by Using Per-Sample Gradient. ICLR 2023
- [c54] Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim: Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation. INTERSPEECH 2023: 316-320
- [c53] Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun: Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification. INTERSPEECH 2023: 5436-5440
- [c52] Hojoon Lee, Hanseul Cho, Hyunseung Kim, Daehoon Gwak, Joonkee Kim, Jaegul Choo, Se-Young Yun, Chulhee Yun: PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning. NeurIPS 2023
- [c51] Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun: Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint. NeurIPS 2023
- [c50] Junghyun Lee, Laura Schmid, Se-Young Yun: Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks. OPODIS 2023: 20:1-20:25
- [i78] Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun: Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. CoRR abs/2302.01530 (2023)
- [i77] Sumyeong Ahn, Jongwoo Ko, Se-Young Yun: CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition. CoRR abs/2302.05499 (2023)
- [i76] Jihwan Oh, Joonkee Kim, Minchan Jeong, Se-Young Yun: Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning. CoRR abs/2303.01768 (2023)
- [i75] Junghyun Lee, Laura Schmid, Se-Young Yun: Communication-Efficient Collaborative Heterogeneous Bandits in Networks. CoRR abs/2303.05445 (2023)
- [i74] Sungnyun Kim, Sangmin Bae, Se-Young Yun: Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning. CoRR abs/2303.11101 (2023)
- [i73] Sangmook Kim, Sangmin Bae, Hwanjun Song, Se-Young Yun: Re-thinking Federated Active Learning based on Inter-class Diversity. CoRR abs/2303.12317 (2023)
- [i72] Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim: Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation. CoRR abs/2305.11685 (2023)
- [i71] Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun: Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification. CoRR abs/2305.14032 (2023)
- [i70] Hojoon Lee, Hanseul Cho, Hyunseung Kim, Daehoon Gwak, Joonkee Kim, Jaegul Choo, Se-Young Yun, Chulhee Yun: Enhancing Generalization and Plasticity for Sample Efficient Reinforcement Learning. CoRR abs/2306.10711 (2023)
- [i69] Kaito Ariu, Alexandre Proutière, Se-Young Yun: Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model. CoRR abs/2306.12968 (2023)
- [i68] Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun: FedSoL: Bridging Global Alignment and Local Generality in Federated Learning. CoRR abs/2308.12532 (2023)
- [i67] Seongha Eom, Namgyu Ho, Jaehoon Oh, Se-Young Yun: Cross-Modal Retrieval Meets Inference: Improving Zero-Shot Classification with Cross-Modal Retrieval. CoRR abs/2308.15273 (2023)
- [i66] Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun: Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding. CoRR abs/2310.05424 (2023)
- [i65] Seonghyun Park, Narae Ryu, Gahee Kim, Dongyeop Woo, Se-Young Yun, Sungsoo Ahn: Non-backtracking Graph Neural Networks. CoRR abs/2310.07430 (2023)
- [i64] Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun: NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models. CoRR abs/2310.10054 (2023)
- [i63] Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun: Fine-tuning Pre-trained Models for Robustness Under Noisy Labels. CoRR abs/2310.17668 (2023)
- [i62] Junghyun Lee, Se-Young Yun, Kwang-Sung Jun: Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion. CoRR abs/2310.18554 (2023)
- [i61] Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun: Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint. CoRR abs/2310.18593 (2023)
- [i60] Taehyeon Kim, Joonkee Kim, Gihun Lee, Se-Young Yun: Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions. CoRR abs/2311.00233 (2023)
- [i59] Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, Se-young Yun: HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning. CoRR abs/2311.00321 (2023)
- [i58] Felix den Breejen, Sangmin Bae, Stephen Cha, Tae-Young Kim, Seounghyun Koh, Se-Young Yun: Fine-Tuning the Retrieval Mechanism for Tabular Deep Learning. CoRR abs/2311.07343 (2023)
- [i57] Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sung Ju Hwang, Se-young Yun: Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models. CoRR abs/2311.08106 (2023)
- [i56] Seongyoon Kim, Gihun Lee, Jaehoon Oh, Se-Young Yun: FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning. CoRR abs/2311.13267 (2023)
- [i55] Yongjin Yang, Jongwoo Ko, Se-Young Yun: Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models. CoRR abs/2311.15569 (2023)
- [i54] Yongjin Yang, Taehyeon Kim, Se-Young Yun: Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning. CoRR abs/2312.11260 (2023)
- 2022
- [j9] Sungnyun Kim, Se-Young Yun: Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch. IEEE Access 10: 53894-53908 (2022)
- [j8] Taehyeon Kim, Se-Young Yun: Revisiting Orthogonality Regularization: A Study for Convolutional Neural Networks in Image Classification. IEEE Access 10: 69741-69749 (2022)
- [c49] Sangmook Kim, Wonyoung Shin, Soohyuk Jang, Hwanjun Song, Se-Young Yun: FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning. CIKM 2022: 972-981
- [c48] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning. CIKM 2022: 4359-4363
- [c47] Jaehoon Oh, Jongwoo Ko, Se-Young Yun: Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks. EMNLP 2022: 6747-6754
- [c46] Mingyu Kim, Kyeongryeol Go, Se-Young Yun: Neural Processes with Stochastic Attention: Paying more attention to the context dataset. ICLR 2022
- [c45] Jaehoon Oh, Sangmook Kim, Se-Young Yun: FedBABU: Toward Enhanced Representation for Federated Image Classification. ICLR 2022
- [c44] Sungnyun Kim, Jaewoo Shin, Seongha Eom, Jihwan Oh, Se-Young Yun: Real-time and Explainable Detection of Epidemics with Global News Data. Healthcare AI and COVID-19 Workshop 2022: 73-90
- [c43] Jung-Hun Kim, Milan Vojnovic, Se-Young Yun: Rotting Infinitely Many-Armed Bandits. ICML 2022: 11229-11254
- [c42] Daniel Bienstock, Minchan Jeong, Apurv Shukla, Se-Young Yun: Robust Streaming PCA. NeurIPS 2022
- [c41] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun: Preservation of the Global Knowledge by Not-True Distillation in Federated Learning. NeurIPS 2022
- [c40] Gihun Lee, SangMook Kim, Joonkee Kim, Se-Young Yun: MEDIAR: Harmony of Data-Centric and Model-Centric for Multi-Modality Microscopy. Cell Segmentation Challenge @ NeurIPS 2022: 1-16
- [c39] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty. NeurIPS 2022
- [i53] Jung-Hun Kim, Milan Vojnovic, Se-Young Yun: Rotting infinitely many-armed bandits. CoRR abs/2201.12975 (2022)
- [i52] Jaeyeon Ahn, Taehyeon Kim, Seyoung Yun: Mold into a Graph: Efficient Bayesian Optimization over Mixed-Spaces. CoRR abs/2202.00893 (2022)
- [i51] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: Understanding Cross-Domain Few-Shot Learning: An Experimental Study. CoRR abs/2202.01339 (2022)
- [i50] Stephen Cha, Taehyeon Kim, Hayeon Lee, Se-Young Yun: SuperNet in Neural Architecture Search: A Taxonomic Survey. CoRR abs/2204.03916 (2022)
- [i49] Mingyu Kim, Kyeongryeol Go, Se-Young Yun: Neural Processes with Stochastic Attention: Paying more attention to the context dataset. CoRR abs/2204.05449 (2022)
- [i48] Sangmook Kim, Wonyoung Shin, Soohyuk Jang, Hwanjun Song, Se-Young Yun: FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning. CoRR abs/2205.01310 (2022)
- [i47] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning. CoRR abs/2205.05282 (2022)
- [i46] Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun: Revisiting the Updates of a Pre-trained Model for Few-shot Learning. CoRR abs/2205.07874 (2022)
- [i45] Jung-Hun Kim, Se-Young Yun: Adversarial Bandits Robust to S-Switch Regret. CoRR abs/2205.14839 (2022)
- [i44] Sumyeong Ahn, Seongyoon Kim, Se-Young Yun: Mitigating Dataset Bias by Using Per-sample Gradient. CoRR abs/2205.15704 (2022)
- [i43] Taehyeon Kim, Se-Young Yun: Supernet Training for Federated Image Classification under System Heterogeneity. CoRR abs/2206.01366 (2022)
- [i42] Jongwoo Ko, Bongsoo Yi, Se-Young Yun: ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise. CoRR abs/2206.07277 (2022)
- [i41] Jaehoon Oh, Se-Young Yun: Demystifying the Base and Novel Performances for Few-shot Class-incremental Learning. CoRR abs/2206.10596 (2022)
- [i40] Taehyeon Kim, Heesoo Myeong, Se-Young Yun: Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search. CoRR abs/2206.13130 (2022)
- [i39] Jihwan Oh, Joonkee Kim, Se-Young Yun: Risk Perspective Exploration in Distributional Reinforcement Learning. CoRR abs/2206.14170 (2022)
- [i38] Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun: Benchmark Dataset for Precipitation Forecasting by Post-Processing the Numerical Weather Prediction. CoRR abs/2206.15241 (2022)
- [i37] Mingyu Kim, Jihwan Oh, Yongsik Lee, Joonkee Kim, Seonghwan Kim, Song Chong, Se-Young Yun: The StarCraft Multi-Agent Challenges+: Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions. CoRR abs/2207.02007 (2022)
- [i36] Yassir Jedra, Junghyun Lee, Alexandre Proutière, Se-Young Yun: Nearly Optimal Latent State Decoding in Block MDPs. CoRR abs/2208.08480 (2022)
- [i35] Jaehoon Oh, Jongwoo Ko, Se-Young Yun: Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks. CoRR abs/2210.09588 (2022)
- [i34] Sumyeong Ahn, Se-Young Yun: Denoising after Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels. CoRR abs/2212.01189 (2022)
- [i33] Taehyeon Kim, Shinhwan Kang, Hyeonjeong Shin, Deukryeol Yoon, Seongha Eom, Kijung Shin, Se-Young Yun: Region-Conditioned Orthogonal 3D U-Net for Weather4Cast Competition. CoRR abs/2212.02059 (2022)
- [i32] Gihun Lee, Sangmook Kim, Joonkee Kim, Se-Young Yun: MEDIAR: Harmony of Data-Centric and Model-Centric for Multi-Modality Microscopy. CoRR abs/2212.03465 (2022)
- [i31] Namgyu Ho, Laura Schmid, Se-Young Yun: Large Language Models Are Reasoning Teachers. CoRR abs/2212.10071 (2022)
- 2021
- [j7] Shreyas Sekar, Milan Vojnovic, Se-Young Yun: A Test Score-Based Approach to Stochastic Submodular Optimization. Manag. Sci. 67(2): 1075-1092 (2021)
- [c38] Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun: BOIL: Towards Representation Change for Few-shot Learning. ICLR 2021
- [c37] Kyoungseok Jang, Kwang-Sung Jun, Se-Young Yun, Wanmo Kang: Improved Regret Bounds of Bilinear Bandits using Action Space Analysis. ICML 2021: 4744-4754
- [c36] Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun: Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation. IJCAI 2021: 2628-2635
- [c35] Aleksandra Gruca, Federico Serva, Llorenç Lliso, Pilar Rípodas, Xavier Calbet, Pedro Herruzo, Jirí Pihrt, Rudolf Raevskiy, Petr Simánek, Matej Choma, Yang Li, Haiyu Dong, Yury Belousov, Sergey Polezhaev, Brian Pulfer, Minseok Seo, Doyi Kim, Seungheon Shin, Eunbin Kim, Sewoong Ahn, Yeji Choi, Jinyoung Park, Minseok Son, Seungju Cho, Inyoung Lee, Changick Kim, Taehyeon Kim, Shinhwan Kang, Hyeonjeong Shin, Deukryeol Yoon, Seongha Eom, Kijung Shin, Se-Young Yun, Bertrand Le Saux, Michael K. Kopp, Sepp Hochreiter, David P. Kreil: Weather4cast at NeurIPS 2022: Super-Resolution Rain Movie Prediction under Spatio-temporal Shifts. NeurIPS (Competition and Demos) 2021: 292-313
- [c34] Taehyeon Kim, Jongwoo Ko, Sangwook Cho, Jinhwan Choi, Se-Young Yun: FINE Samples for Learning with Noisy Labels. NeurIPS 2021: 24137-24149
- [i30] Taehyeon Kim, Jongwoo Ko, Jinhwan Choi, Sangwook Cho, Se-Young Yun: FINE Samples for Learning with Noisy Labels. CoRR abs/2102.11628 (2021)
- [i29]