Yi-Hsuan Yang
2020 – today
2022
- [j39] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: An Analysis Method for Metric-Level Switching in Beat Tracking. IEEE Signal Process. Lett. 29: 2153-2157 (2022)
- [c146] Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang: Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. ICASSP 2022: 466-470
- [c145] Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. ICASSP 2022: 786-790
- [c144] Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang: KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE Using Mel-Spectrograms. ICASSP 2022: 956-960
- [i72] Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. CoRR abs/2202.09907 (2022)
- [i71] Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang: DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation. CoRR abs/2208.04756 (2022)
- [i70] Yen-Tung Yeh, Bo-Yu Chen, Yi-Hsuan Yang: Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation. CoRR abs/2209.01751 (2022)
- [i69] Shih-Lun Wu, Yi-Hsuan Yang: Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach. CoRR abs/2209.08212 (2022)
- [i68] Chih-Pin Tan, Alvin W. Y. Su, Yi-Hsuan Yang: Melody Infilling with User-Provided Structural Context. CoRR abs/2210.02829 (2022)
- [i67] Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang: JukeDrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE. CoRR abs/2210.06007 (2022)
- [i66] Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang: An Analysis Method for Metric-Level Switching in Beat Tracking. CoRR abs/2210.06817 (2022)

2021
- [j38] Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang: Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. IEEE Signal Process. Lett. 28: 1100-1104 (2021)
- [j37] Juan Sebastián Gómez Cañón, Estefanía Cano, Tuomas Eerola, Perfecto Herrera, Xiao Hu, Yi-Hsuan Yang, Emilia Gómez: Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Process. Mag. 38(6): 106-114 (2021)
- [j36] Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai, Yi-Hsuan Yang: Leveraging Affective Hashtags for Ranking Music Recommendations. IEEE Trans. Affect. Comput. 12(1): 78-91 (2021)
- [c143] Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. AAAI 2021: 178-186
- [c142] Fu-Rong Yang, Yin-Ping Cho, Yi-Hsuan Yang, Da-Yi Wu, Shan-Hung Wu, Yi-Wen Liu: Mandarin Singing Voice Synthesis with a Phonology-based Duration Model. APSIPA ASC 2021: 1975-1981
- [c141] Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang: Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. EUSIPCO 2021: 391-395
- [c140] Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard: Relative Positional Encoding for Transformers with Linear Complexity. ICML 2021: 7067-7079
- [c139] Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang: Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. ISMIR 2021: 97-104
- [c138] Juan Sebastián Gómez Cañón, Estefanía Cano, Yi-Hsuan Yang, Perfecto Herrera, Emilia Gómez: Let's agree to disagree: Consensus Entropy Active Learning for Personalized Music Emotion Recognition. ISMIR 2021: 237-245
- [c137] Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang: A Benchmarking Initiative for Audio-domain Music Generation using the FreeSound Loop Dataset. ISMIR 2021: 310-317
- [c136] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. ISMIR 2021: 318-325
- [c135] Pedro Sarmento, Adarsh Kumar, C. J. Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. ISMIR 2021: 610-617
- [c134] Yi-Hsuan Yang: Automatic Music Composition with Transformers. MMArt&ACM@ICMR 2021: 1
- [i65] Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. CoRR abs/2101.02402 (2021)
- [i64] Shih-Lun Wu, Yi-Hsuan Yang: MuseMorphose: Full-Song and Fine-Grained Music Style Transfer with Just One Transformer VAE. CoRR abs/2105.04090 (2021)
- [i63] Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard: Relative Positional Encoding for Transformers with Linear Complexity. CoRR abs/2105.08399 (2021)
- [i62] Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang: Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. CoRR abs/2106.08685 (2021)
- [i61] Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang: Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. CoRR abs/2106.08703 (2021)
- [i60] Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, Yi-Hsuan Yang: MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding. CoRR abs/2107.05223 (2021)
- [i59] Pedro Sarmento, Adarsh Kumar, C. J. Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. CoRR abs/2107.14653 (2021)
- [i58] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. CoRR abs/2108.01374 (2021)
- [i57] Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang: A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset. CoRR abs/2108.01576 (2021)
- [i56] Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang: Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. CoRR abs/2108.05064 (2021)
- [i55] Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang: KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE using Mel-spectrograms. CoRR abs/2110.04005 (2021)
- [i54] Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang: Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. CoRR abs/2110.06525 (2021)
- [i53] Wei-Han Hsu, Bo-Yu Chen, Yi-Hsuan Yang: Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features. CoRR abs/2110.08862 (2021)
- [i52] Joann Ching, Yi-Hsuan Yang: Learning To Generate Piano Music With Sustain Pedals. CoRR abs/2111.01216 (2021)
- [i51] Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang: Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer. CoRR abs/2111.04093 (2021)
- [i50] Chih-Pin Tan, Chin-Jui Chang, Alvin W. Y. Su, Yi-Hsuan Yang: Music Score Expansion with Variable-Length Infilling. CoRR abs/2111.06046 (2021)

2020
- [j35] Szu-Yu Chou, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Fast Tensor Factorization for Large-Scale Context-Aware Recommendation from Implicit Feedback. IEEE Trans. Big Data 6(1): 201-208 (2020)
- [j34] Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Backpropagation With N-D Vector-Valued Neurons Using Arbitrary Bilinear Products. IEEE Trans. Neural Networks Learn. Syst. 31(7): 2638-2652 (2020)
- [c133] Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang: Addressing The Confounds Of Accompaniments In Singer Identification. ICASSP 2020: 1-5
- [c132] Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang: Speech-To-Singing Conversion in an Encoder-Decoder Framework. ICASSP 2020: 261-265
- [c131] Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier: A Comparative Study of Western and Chinese Classical Music Based on Soundscape Models. ICASSP 2020: 521-525
- [c130] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Score and Lyrics-Free Singing Voice Generation. ICCC 2020: 196-203
- [c129] Da-Yi Wu, Yi-Hsuan Yang: Speech-to-Singing Conversion Based on Boundary Equilibrium GAN. INTERSPEECH 2020: 1316-1320
- [c128] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. INTERSPEECH 2020: 1997-2001
- [c127] Shih-Lun Wu, Yi-Hsuan Yang: The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. ISMIR 2020: 142-149
- [c126] António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra: The Freesound Loop Dataset and Annotation Tool. ISMIR 2020: 287-294
- [c125] Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang: Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. ISMIR 2020: 424-431
- [c124] Yu-Hua Chen, Yu-Siang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang: Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. ISMIR 2020: 756-763
- [c123] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. ISMIR 2020: 764-770
- [c122] Yu-Siang Huang, Yi-Hsuan Yang: Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions. ACM Multimedia 2020: 1180-1188
- [c121] Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su: Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. MMSP 2020: 1-6
- [i49] Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, Terence Leong, Yi-Hsuan Yang: Automatic Melody Harmonization with Triad Chords: A Comparative Study. CoRR abs/2001.02360 (2020)
- [i48] Yu-Siang Huang, Yi-Hsuan Yang: Pop Music Transformer: Generating Music with Rhythm and Harmony. CoRR abs/2002.00212 (2020)
- [i47] Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang: Speech-to-Singing Conversion in an Encoder-Decoder Framework. CoRR abs/2002.06595 (2020)
- [i46] Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang: Addressing the confounds of accompaniments in singer identification. CoRR abs/2002.06817 (2020)
- [i45] Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier: A Comparative Study of Western and Chinese Classical Music based on Soundscape Models. CoRR abs/2002.09021 (2020)
- [i44] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. CoRR abs/2005.08526 (2020)
- [i43] Da-Yi Wu, Yi-Hsuan Yang: Speech-to-Singing Conversion based on Boundary Equilibrium GAN. CoRR abs/2005.13835 (2020)
- [i42] Shih-Lun Wu, Yi-Hsuan Yang: The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. CoRR abs/2008.01307 (2020)
- [i41] Yu-Hua Chen, Yu-Hsiang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang: Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. CoRR abs/2008.01431 (2020)
- [i40] Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang: Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. CoRR abs/2008.02011 (2020)
- [i39] Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su: Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. CoRR abs/2008.02480 (2020)
- [i38] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. CoRR abs/2008.10267 (2020)
- [i37] António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra: The Freesound Loop Dataset and Annotation Tool. CoRR abs/2008.11507 (2020)
2010 – 2019
2019
- [j33] Juhan Nam, Keunwoo Choi, Jongpil Lee, Szu-Yu Chou, Yi-Hsuan Yang: Deep Learning for Audio-Based Music Classification and Tagging: Teaching Computers to Distinguish Rock from Bach. IEEE Signal Process. Mag. 36(1): 41-51 (2019)
- [j32] Ting-Wei Su, Yuan-Ping Chen, Li Su, Yi-Hsuan Yang: TENT: Technique-Embedded Note Tracking for Real-World Guitar Solo Recordings. Trans. Int. Soc. Music. Inf. Retr. 2(1): 15-28 (2019)
- [j31] Jen-Yu Liu, Yi-Hsuan Yang, Shyh-Kang Jeng: Weakly-Supervised Visual Instrument-Playing Action Detection in Videos. IEEE Trans. Multim. 21(4): 887-901 (2019)
- [c120] Bryan Wang, Yi-Hsuan Yang: PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network. AAAI 2019: 1174-1181
- [c119] Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. APSIPA 2019: 339-346
- [c118] Frédéric Tamagnan, Yi-Hsuan Yang: Drum Fills Detection and Generation. CMMR 2019: 91-99
- [c117] Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Learning to Match Transient Sound Events Using Attentional Similarity for Few-shot Sound Recognition. ICASSP 2019: 26-30
- [c116] Tsung-Han Hsieh, Li Su, Yi-Hsuan Yang: A Streamlined Encoder/decoder Architecture for Melody Extraction. ICASSP 2019: 156-160
- [c115] Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang: Multitask Learning for Frame-level Instrument Recognition. ICASSP 2019: 381-385
- [c114] Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang: Musical Composition Style Transfer via Disentangled Timbre Representations. IJCAI 2019: 4697-4703
- [c113] Jen-Yu Liu, Yi-Hsuan Yang: Dilated Convolution with Dilated GRU for Music Source Separation. IJCAI 2019: 4718-4724
- [c112] Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang: Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. IJCAI 2019: 6506-6508
- [c111] Zhe-Cheng Fan, Tak-Shing Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Deep Cyclic Group Networks. IJCNN 2019: 1-8
- [c110] Eva Zangerle, Michael Vötter, Ramona Huber, Yi-Hsuan Yang: Hit Song Prediction: Leveraging Low- and High-Level Audio Features. ISMIR 2019: 319-326
- [c109] Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. IUI Workshops 2019
- [c108] Hsiao-Tzu Hung, Yu-Hua Chen, Maximilian Mayerl, Michael Vötter, Eva Zangerle, Yi-Hsuan Yang: MediaEval 2019 Emotion and Theme Recognition task: A VQ-VAE Based Approach. MediaEval 2019
- [c107] Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Bo-Yu Chen, Yi-Hsuan Yang, Eva Zangerle: Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks. MediaEval 2019
- [c106] Kai-Hsiang Cheng, Szu-Yu Chou, Yi-Hsuan Yang: Multi-label Few-shot Learning for Sound Event Recognition. MMSP 2019: 1-5
- [c105] Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang: Collaborative Similarity Embedding for Recommender Systems. WWW 2019: 2637-2643
- [i36] Hao-Wen Dong, Yi-Hsuan Yang: Towards a Deeper Understanding of Adversarial Losses. CoRR abs/1901.08753 (2019)
- [i35] Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang: A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. CoRR abs/1902.03722 (2019)
- [i34] Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang: Collaborative Similarity Embedding for Recommender Systems. CoRR abs/1902.06188 (2019)
- [i33] Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang: Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. CoRR abs/1905.11689 (2019)
- [i32] Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang: Musical Composition Style Transfer via Disentangled Timbre Representations. CoRR abs/1905.13567 (2019)
- [i31] Jen-Yu Liu, Yi-Hsuan Yang: Dilated Convolution with Dilated GRU for Music Source Separation. CoRR abs/1906.01203 (2019)
- [i30] Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang: Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. CoRR abs/1908.09484 (2019)
- [i29] Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang: Score and Lyrics-Free Singing Voice Generation. CoRR abs/1912.11747 (2019)
- [i28] Meinard Müller, Emilia Gómez, Yi-Hsuan Yang: Computational Methods for Melody and Voice Processing in Music Recordings (Dagstuhl Seminar 19052). Dagstuhl Reports 9(1): 125-177 (2019)

2018
- [j30] Yu-Hao Chin, Jia-Ching Wang, Ju-Chiang Wang, Yi-Hsuan Yang: Predicting the Probability Density Function of Music Emotion Using Emotion Space Mapping. IEEE Trans. Affect. Comput. 9(4): 541-549 (2018)
- [j29] Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang: Pop Music Highlighter: Marking the Emotion Keypoints. Trans. Int. Soc. Music. Inf. Retr. 1(1): 68-78 (2018)
- [j28] Jen-Chun Lin, Wen-Li Wei, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, Hong-Yuan Mark Liao: Coherent Deep-Net Fusion To Classify Shots In Concert Videos. IEEE Trans. Multim. 20(11): 3123-3136 (2018)
- [c104] Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, Yi-Hsuan Yang: MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment. AAAI 2018: 34-41
- [c103] Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang: Generating Music Medleys via Playing Music Puzzle Games. AAAI 2018: 2281-2288
- [c102] Chia-An Yu, Ching-Lun Tai, Tak-Shing Chan, Yi-Hsuan Yang: Modeling Multi-way Relations with Hypergraph Embedding. CIKM 2018: 1707-1710
- [c101] Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, Hong-Yuan Mark Liao: Seethevoice: Learning from Music to Visual Storytelling of Shots. ICME 2018: 1-6
- [c100] Yi-Wei Chen, Yi-Hsuan Yang, Homer H. Chen: Cross-Cultural Music Emotion Recognition by Adversarial Discriminative Domain Adaptation. ICMLA 2018: 467-472
- [c99] Hao-Min Liu, Yi-Hsuan Yang: Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network. ICMLA 2018: 722-727
- [c98] Jen-Yu Liu, Yi-Hsuan Yang: Denoising Auto-Encoder with Recurrent Skip Connections and Residual Regression for Music Source Separation. ICMLA 2018: 773-778
- [c97] Szu-Yu Chou, Jyh-Shing Roger Jang, Yi-Hsuan Yang: Learning to Recognize Transient Sound Events using Attentional Supervision. IJCAI 2018: 3336-3342
- [c96] Yun-Ning Hung, Yi-Hsuan Yang: Frame-level Instrument Recognition by Timbre and Pitch. ISMIR 2018: 135-142
- [c95] Hao-Wen Dong, Yi-Hsuan Yang: Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation. ISMIR 2018: 190-196
- [i27] Tak-Shing T. Chan, Yi-Hsuan Yang: Polar n-Complex and n-Bicomplex Singular Value Decomposition and Principal Component Pursuit. CoRR abs/1801.03773 (2018)
- [i26] Tak-Shing T. Chan, Yi-Hsuan Yang: Informed Group-Sparse Representation for Singing Voice Separation. CoRR abs/1801.03815 (2018)
- [i25] Tak-Shing T. Chan, Yi-Hsuan Yang: Complex and Quaternionic Principal Component Pursuit and Its Application to Audio Separation. CoRR abs/1801.03816 (2018)
- [i24] Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang: Pop Music Highlighter: Marking the Emotion Keypoints. CoRR abs/1802.10495 (2018)
- [i23] Hao-Wen Dong, Yi-Hsuan Yang: Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation. CoRR abs/1804.09399 (2018)
- [i22] Jen-Yu Liu, Yi-Hsuan Yang, Shyh-Kang Jeng: Weakly-supervised Visual Instrument-playing Action Detection in Videos. CoRR abs/1805.02031 (2018)
- [i21] Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products. CoRR abs/1805.09621 (2018)
- [i20] Yun-Ning Hung, Yi-Hsuan Yang: Frame-level Instrument Recognition by Timbre and Pitch. CoRR abs/1806.09587 (2018)
- [i19] Jen-Yu Liu, Yi-Hsuan Yang: Denoising Auto-encoder with Recurrent Skip Connections and Residual Regression for Music Source Separation. CoRR abs/1807.01898 (2018)
- [i18] Cheng-Wei Wu, Jen-Yu Liu, Yi-Hsuan Yang, Jyh-Shing Roger Jang: Singing Style Transfer Using Cycle-Consistent Boundary Equilibrium Generative Adversarial Networks. CoRR abs/1807.02254 (2018)
- [i17] Hao-Min Liu, Yi-Hsuan Yang: Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network. CoRR abs/1807.11161 (2018)
- [i16] Hao-Wen Dong, Yi-Hsuan Yang: Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation. CoRR abs/1810.04714 (2018)
- [i15] Tsung-Han Hsieh, Li Su, Yi-Hsuan Yang: A Streamlined Encoder/Decoder Architecture for Melody Extraction. CoRR abs/1810.12947 (2018)
- [i14] Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang: Multitask learning for frame-level instrument recognition. CoRR abs/1811.01143 (2018)
- [i13] Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang: Learning Disentangled Representations for Timbre and Pitch in Music Audio. CoRR abs/1811.03271 (2018)
- [i12] Bryan Wang, Yi-Hsuan Yang: PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network. CoRR abs/1811.04357 (2018)
- [i11]