


Yi-Hsuan Yang
2020 – today
- 2024
- [j43] Gaël Richard, Vincent Lostanlen, Yi-Hsuan Yang, Meinard Müller: Model-Based Deep Learning for Music Information Research: Leveraging diverse knowledge sources to enhance explainability, controllability, and resource efficiency [Special Issue On Model-Based and Data-Driven Audio Signal Processing]. IEEE Signal Process. Mag. 41(6): 51-59 (2024)
- [c153] Chih-Pin Tan, Shuen-Huei Guan, Yi-Hsuan Yang: PiCoGen: Generate Piano Covers with a Two-stage Approach. ICMR 2024: 1180-1184
- [i88] Yu-Hua Chen, Woosung Choi, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Kin Wai Cheuk, Yuki Mitsufuji, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Improving Unsupervised Clean-to-Rendered Guitar Tone Transformation Using GANs and Integrated Unaligned Clean Data. CoRR abs/2406.15751 (2024)
- [i87] Yu-Hua Chen, Yen-Tung Yeh, Yuan-Chiao Cheng, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Towards zero-shot amplifier modeling: One-to-many amplifier modeling via tone embedding control. CoRR abs/2407.10646 (2024)
- [i86] Yun-Han Lan, Wen-Yi Hsiao, Hao-Chung Cheng, Yi-Hsuan Yang: MusiConGen: Rhythm and Chord Control for Transformer-Based Text-to-Music Generation. CoRR abs/2407.15060 (2024)
- [i85] Fang-Duo Tsai, Shih-Lun Wu, Haven Kim, Bo-Yu Chen, Hao-Chung Cheng, Yi-Hsuan Yang: Audio Prompt Adapter: Unleashing Music Editing Abilities for Text-to-Music with Lightweight Finetuning. CoRR abs/2407.16564 (2024)
- [i84] Ying-Shuo Lee, Yueh-Po Peng, Jui-Te Wu, Ming Cheng, Li Su, Yi-Hsuan Yang: Distortion Recovery: A Two-Stage Method for Guitar Effect Removal. CoRR abs/2407.16639 (2024)
- [i83] Jingyue Huang, Yi-Hsuan Yang: Emotion-Driven Melody Harmonization via Melodic Variation and Functional Representation. CoRR abs/2407.20176 (2024)
- [i82] Chih-Pin Tan, Shuen-Huei Guan, Yi-Hsuan Yang: PiCoGen: Generate Piano Covers with a Two-stage Approach. CoRR abs/2407.20883 (2024)
- [i81] Jingyue Huang, Ke Chen, Yi-Hsuan Yang: Emotion-driven Piano Music Generation via Two-stage Disentanglement and Functional Representation. CoRR abs/2407.20955 (2024)
- [i80] Chih-Pin Tan, Hsin Ai, Yi-Hsin Chang, Shuen-Huei Guan, Yi-Hsuan Yang: PiCoGen2: Piano cover generation with transfer learning approach and weakly aligned data. CoRR abs/2408.01551 (2024)
- [i79] Yen-Tung Yeh, Wen-Yi Hsiao, Yi-Hsuan Yang: Hyper Recurrent Neural Network: Condition Mechanisms for Black-box Audio Effect Modeling. CoRR abs/2408.04829 (2024)
- [i78] Yen-Tung Yeh, Wen-Yi Hsiao, Yi-Hsuan Yang: PyNeuralFx: A Python Package for Neural Audio Effect Modeling. CoRR abs/2408.06053 (2024)
- [i77] Yen-Tung Yeh, Yu-Hua Chen, Yuan-Chiao Cheng, Jui-Te Wu, Jun-Jie Fu, Yi-Fan Yeh, Yi-Hsuan Yang: DDSP Guitar Amp: Interpretable Guitar Amplifier Modeling. CoRR abs/2408.11405 (2024)
- [i76] Dinh-Viet-Toan Le, Yi-Hsuan Yang: METEOR: Melody-aware Texture-controllable Symbolic Orchestral Music Generation. CoRR abs/2409.11753 (2024)
- [i75] Yu-Hua Chen, Yuan-Chiao Cheng, Yen-Tung Yeh, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Demo of Zero-Shot Guitar Amplifier Modelling: Enhancing Modeling with Hyper Neural Networks. CoRR abs/2410.04702 (2024)
- [i74] Chon-In Leong, I-Ling Chung, Kin-Fong Chao, Jun-You Wang, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Music2Fail: Transfer Music to Failed Recorder Style. CoRR abs/2411.18075 (2024)
- 2023
- [j42] Shih-Lun Wu, Yi-Hsuan Yang: MuseMorphose: Full-Song and Fine-Grained Piano Music Style Transfer With One Transformer VAE. IEEE ACM Trans. Audio Speech Lang. Process. 31: 1953-1967 (2023)
- [j41] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: Local Periodicity-Based Beat Tracking for Expressive Classical Piano Music. IEEE ACM Trans. Audio Speech Lang. Process. 31: 2824-2835 (2023)
- [j40] Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang: Theme Transformer: Symbolic Music Generation With Theme-Conditioned Transformer. IEEE Trans. Multim. 25: 3495-3508 (2023)
- [c152] Shih-Lun Wu, Yi-Hsuan Yang: Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach. ICASSP 2023: 1-5
- [i73] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: Local Periodicity-Based Beat Tracking for Expressive Classical Piano Music. CoRR abs/2308.10355 (2023)
- 2022
- [j39] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: An Analysis Method for Metric-Level Switching in Beat Tracking. IEEE Signal Process. Lett. 29: 2153-2157 (2022)
- [c151] Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang: Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. ICASSP 2022: 466-470
- [c150] Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. ICASSP 2022: 786-790
- [c149] Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang: KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE Using Mel-Spectrograms. ICASSP 2022: 956-960
- [c148] Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang: DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation. ISMIR 2022: 76-83
- [c147] Yen-Tung Yeh, Yi-Hsuan Yang, Bo-Yu Chen: Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation. ISMIR 2022: 132-140
- [c146] Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang: JukeDrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE. ISMIR 2022: 193-200
- [c145] Chih-Pin Tan, Alvin W. Y. Su, Yi-Hsuan Yang: Melody Infilling with User-Provided Structural Context. ISMIR 2022: 834-841
- [i72] Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. CoRR abs/2202.09907 (2022)
- [i71] Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang: DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation. CoRR abs/2208.04756 (2022)
- [i70] Yen-Tung Yeh, Bo-Yu Chen, Yi-Hsuan Yang: Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation. CoRR abs/2209.01751 (2022)
- [i69] Shih-Lun Wu, Yi-Hsuan Yang: Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach. CoRR abs/2209.08212 (2022)
- [i68] Chih-Pin Tan, Alvin W. Y. Su, Yi-Hsuan Yang: Melody Infilling with User-Provided Structural Context. CoRR abs/2210.02829 (2022)
- [i67] Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang: JukeDrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE. CoRR abs/2210.06007 (2022)
- [i66] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: An Analysis Method for Metric-Level Switching in Beat Tracking. CoRR abs/2210.06817 (2022)
- 2021
- [j38] Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang: Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. IEEE Signal Process. Lett. 28: 1100-1104 (2021)
- [j37] Juan Sebastián Gómez Cañón, Estefanía Cano, Tuomas Eerola, Perfecto Herrera, Xiao Hu, Yi-Hsuan Yang, Emilia Gómez: Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Process. Mag. 38(6): 106-114 (2021)
- [j36] Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai, Yi-Hsuan Yang: Leveraging Affective Hashtags for Ranking Music Recommendations. IEEE Trans. Affect. Comput. 12(1): 78-91 (2021)
- [c144] Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. AAAI 2021: 178-186
- [c143] Fu-Rong Yang, Yin-Ping Cho, Yi-Hsuan Yang, Da-Yi Wu, Shan-Hung Wu, Yi-Wen Liu: Mandarin Singing Voice Synthesis with a Phonology-based Duration Model. APSIPA ASC 2021: 1975-1981
- [c142] Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang: Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. EUSIPCO 2021: 391-395
- [c141] Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard: Relative Positional Encoding for Transformers with Linear Complexity. ICML 2021: 7067-7079
- [c140] Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang: Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. ISMIR 2021: 97-104
- [c139] Juan Sebastián Gómez Cañón, Estefanía Cano, Yi-Hsuan Yang, Perfecto Herrera, Emilia Gómez: Let's agree to disagree: Consensus Entropy Active Learning for Personalized Music Emotion Recognition. ISMIR 2021: 237-245
- [c138] Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang: A Benchmarking Initiative for Audio-domain Music Generation using the FreeSound Loop Dataset. ISMIR 2021: 310-317
- [c137] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. ISMIR 2021: 318-325
- [c136] Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. ISMIR 2021: 610-617
- [c135] Yi-Hsuan Yang: Automatic Music Composition with Transformers. MMArt&ACM@ICMR 2021: 1
- [c134] Taejun Kim, Yi-Hsuan Yang, Juhan Nam: Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization. NIME 2021
- [i65] Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. CoRR abs/2101.02402 (2021)
- [i64] Shih-Lun Wu, Yi-Hsuan Yang: MuseMorphose: Full-Song and Fine-Grained Music Style Transfer with Just One Transformer VAE. CoRR abs/2105.04090 (2021)
- [i63] Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard: Relative Positional Encoding for Transformers with Linear Complexity. CoRR abs/2105.08399 (2021)
- [i62] Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang: Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. CoRR abs/2106.08685 (2021)
- [i61] Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang: Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. CoRR abs/2106.08703 (2021)
- [i60] Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, Yi-Hsuan Yang: MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding. CoRR abs/2107.05223 (2021)
- [i59] Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. CoRR abs/2107.14653 (2021)
- [i58] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. CoRR abs/2108.01374 (2021)
- [i57] Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang: A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset. CoRR abs/2108.01576 (2021)
- [i56] Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang: Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. CoRR abs/2108.05064 (2021)
- [i55] Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang: KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE using Mel-spectrograms. CoRR abs/2110.04005 (2021)
- [i54] Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang: Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. CoRR abs/2110.06525 (2021)
- [i53] Wei-Han Hsu, Bo-Yu Chen, Yi-Hsuan Yang: Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features. CoRR abs/2110.08862 (2021)
- [i52] Joann Ching, Yi-Hsuan Yang: Learning To Generate Piano Music With Sustain Pedals. CoRR abs/2111.01216 (2021)
- [i51] Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang: Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer. CoRR abs/2111.04093 (2021)
- [i50] Chih-Pin Tan, Chin-Jui Chang, Alvin W. Y. Su, Yi-Hsuan Yang: Music Score Expansion with Variable-Length Infilling. CoRR abs/2111.06046 (2021)
- 2020
- [j35] Szu-Yu Chou, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Fast Tensor Factorization for Large-Scale Context-Aware Recommendation from Implicit Feedback. IEEE Trans. Big Data 6(1): 201-208 (2020)
- [j34] Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Backpropagation With N-D Vector-Valued Neurons Using Arbitrary Bilinear Products. IEEE Trans. Neural Networks Learn. Syst. 31(7): 2638-2652 (2020)
- [c133] Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang: Addressing The Confounds Of Accompaniments In Singer Identification. ICASSP 2020: 1-5
- [c132] Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang: Speech-To-Singing Conversion in an Encoder-Decoder Framework. ICASSP 2020: 261-265
- [c131] Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier: A Comparative Study of Western and Chinese Classical Music Based on Soundscape Models. ICASSP 2020: 521-525
- [c130] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Score and Lyrics-Free Singing Voice Generation. ICCC 2020: 196-203
- [c129] Da-Yi Wu, Yi-Hsuan Yang: Speech-to-Singing Conversion Based on Boundary Equilibrium GAN. INTERSPEECH 2020: 1316-1320
- [c128] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. INTERSPEECH 2020: 1997-2001
- [c127] Shih-Lun Wu, Yi-Hsuan Yang: The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. ISMIR 2020: 142-149
- [c126] António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra: The Freesound Loop Dataset and Annotation Tool. ISMIR 2020: 287-294
- [c125] Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang: Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. ISMIR 2020: 424-431
- [c124] Yu-Hua Chen, Yu-Siang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang: Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. ISMIR 2020: 756-763
- [c123] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. ISMIR 2020: 764-770
- [c122] Yu-Siang Huang, Yi-Hsuan Yang: Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions. ACM Multimedia 2020: 1180-1188
- [c121] Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su: Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. MMSP 2020: 1-6
- [i49] Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, Terence Leong, Yi-Hsuan Yang: Automatic Melody Harmonization with Triad Chords: A Comparative Study. CoRR abs/2001.02360 (2020)
- [i48] Yu-Siang Huang, Yi-Hsuan Yang: Pop Music Transformer: Generating Music with Rhythm and Harmony. CoRR abs/2002.00212 (2020)
- [i47] Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang: Speech-to-Singing Conversion in an Encoder-Decoder Framework. CoRR abs/2002.06595 (2020)
- [i46] Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang: Addressing the confounds of accompaniments in singer identification. CoRR abs/2002.06817 (2020)
- [i45] Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier: A Comparative Study of Western and Chinese Classical Music based on Soundscape Models. CoRR abs/2002.09021 (2020)
- [i44] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. CoRR abs/2005.08526 (2020)
- [i43] Da-Yi Wu, Yi-Hsuan Yang: Speech-to-Singing Conversion based on Boundary Equilibrium GAN. CoRR abs/2005.13835 (2020)
- [i42] Shih-Lun Wu, Yi-Hsuan Yang: The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. CoRR abs/2008.01307 (2020)
- [i41] Yu-Hua Chen, Yu-Hsiang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang: Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. CoRR abs/2008.01431 (2020)
- [i40] Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang: Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. CoRR abs/2008.02011 (2020)
- [i39] Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su: Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. CoRR abs/2008.02480 (2020)
- [i38] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. CoRR abs/2008.10267 (2020)
- [i37] António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra: The Freesound Loop Dataset and Annotation Tool. CoRR abs/2008.11507 (2020)
2010 – 2019
- 2019
- [j33] Juhan Nam, Keunwoo Choi, Jongpil Lee, Szu-Yu Chou, Yi-Hsuan Yang: Deep Learning for Audio-Based Music Classification and Tagging: Teaching Computers to Distinguish Rock from Bach. IEEE Signal Process. Mag. 36(1): 41-51 (2019)
- [j32] Ting-Wei Su, Yuan-Ping Chen, Li Su, Yi-Hsuan Yang: TENT: Technique-Embedded Note Tracking for Real-World Guitar Solo Recordings. Trans. Int. Soc. Music. Inf. Retr. 2(1): 15-28 (2019)
- [j31] Jen-Yu Liu, Yi-Hsuan Yang, Shyh-Kang Jeng: Weakly-Supervised Visual Instrument-Playing Action Detection in Videos. IEEE Trans. Multim. 21(4): 887-901 (2019)
- [c120] Bryan Wang, Yi-Hsuan Yang: PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network. AAAI 2019: 1174-1181
- [c119] Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. APSIPA 2019: 339-346
- [c118] Frédéric Tamagnan, Yi-Hsuan Yang: Drum Fills Detection and Generation. CMMR 2019: 91-99
- [c117] Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Learning to Match Transient Sound Events Using Attentional Similarity for Few-shot Sound Recognition. ICASSP 2019: 26-30
- [c116] Tsung-Han Hsieh, Li Su, Yi-Hsuan Yang: A Streamlined Encoder/decoder Architecture for Melody Extraction. ICASSP 2019: 156-160
- [c115] Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang: Multitask Learning for Frame-level Instrument Recognition. ICASSP 2019: 381-385
- [c114] Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang: Musical Composition Style Transfer via Disentangled Timbre Representations. IJCAI 2019: 4697-4703
- [c113] Jen-Yu Liu, Yi-Hsuan Yang: Dilated Convolution with Dilated GRU for Music Source Separation. IJCAI 2019: 4718-4724
- [c112] Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang: Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. IJCAI 2019: 6506-6508
- [c111] Zhe-Cheng Fan, Tak-Shing Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Deep Cyclic Group Networks. IJCNN 2019: 1-8
- [c110] Eva Zangerle, Michael Vötter, Ramona Huber, Yi-Hsuan Yang: Hit Song Prediction: Leveraging Low- and High-Level Audio Features. ISMIR 2019: 319-326
- [c109] Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. IUI Workshops 2019
- [c108] Hsiao-Tzu Hung, Yu-Hua Chen, Maximilian Mayerl, Michael Vötter, Eva Zangerle, Yi-Hsuan Yang: MediaEval 2019 Emotion and Theme Recognition task: A VQ-VAE Based Approach. MediaEval 2019
- [c107] Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Bo-Yu Chen, Yi-Hsuan Yang, Eva Zangerle: Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks. MediaEval 2019
- [c106] Kai-Hsiang Cheng, Szu-Yu Chou, Yi-Hsuan Yang: Multi-label Few-shot Learning for Sound Event Recognition. MMSP 2019: 1-5
- [c105] Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang: Collaborative Similarity Embedding for Recommender Systems. WWW 2019: 2637-2643
- [i36] Hao-Wen Dong, Yi-Hsuan Yang: Towards a Deeper Understanding of Adversarial Losses. CoRR abs/1901.08753 (2019)
- [i35] Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. CoRR abs/1902.03722 (2019)
- [i34] Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang: Collaborative Similarity Embedding for Recommender Systems. CoRR abs/1902.06188 (2019)
- [i33] Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang: Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. CoRR abs/1905.11689 (2019)
- [i32] Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang: Musical Composition Style Transfer via Disentangled Timbre Representations. CoRR abs/1905.13567 (2019)
- [i31] Jen-Yu Liu, Yi-Hsuan Yang: Dilated Convolution with Dilated GRU for Music Source Separation. CoRR abs/1906.01203 (2019)
- [i30] Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. CoRR abs/1908.09484 (2019)
- [i29] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Score and Lyrics-Free Singing Voice Generation. CoRR abs/1912.11747 (2019)
- [i28]