Bernd Bischl
Person information
- affiliation: LMU Munich, Department of Statistics, Germany
2020 – today
- 2024
- [j52]Simon Wiegrebe, Philipp Kopper, Raphael Sonabend, Bernd Bischl, Andreas Bender: Deep learning for survival analysis: a review. Artif. Intell. Rev. 57(3): 65 (2024)
- [j51]Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio: Model-agnostic feature importance and effects with dependent features: a conditional subgroup approach. Data Min. Knowl. Discov. 38(5): 2903-2941 (2024)
- [j50]Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann: Marginal effects for non-linear prediction functions. Data Min. Knowl. Discov. 38(5): 2997-3042 (2024)
- [j49]Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann: Correction: Marginal effects for non-linear prediction functions. Data Min. Knowl. Discov. 38(6): 4234-4235 (2024)
- [j48]Hilde J. P. Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor H. Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter: Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML. J. Artif. Intell. Res. 79: 639-677 (2024)
- [j47]Pieter Gijsbers, Marcos L. P. Bueno, Stefan Coors, Erin LeDell, Sébastien Poirier, Janek Thomas, Bernd Bischl, Joaquin Vanschoren: AMLB: an AutoML Benchmark. J. Mach. Learn. Res. 25: 101:1-101:65 (2024)
- [j46]Felix Ott, Lucas Heublein, David Rügamer, Bernd Bischl, Christopher Mutschler: Fusing structure from motion and simulation-augmented pose regression from optical flow for challenging indoor environments. J. Vis. Commun. Image Represent. 103: 104256 (2024)
- [j45]Daniel Schalk, Bernd Bischl, David Rügamer: Privacy-preserving and lossless distributed estimation of high-dimensional generalized additive mixed models. Stat. Comput. 34(1): 31 (2024)
- [c103]Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei: Probabilistic Self-supervised Representation Learning via Scoring Rules Minimization. ICLR 2024
- [c102]Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl: Position: Why We Must Rethink Empirical Research in Machine Learning. ICML 2024
- [c101]Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas Müller, Frank Hutter, Matthias Feurer, Bernd Bischl: Position: A Call to Action for a Human-Centered AutoML Paradigm. ICML 2024
- [c100]Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer: Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks? ICML 2024
- [c99]Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer: Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry (Extended Abstract). IJCAI 2024: 8466-8470
- [c98]Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei: Diversified Ensemble of Independent Sub-networks for Robust Self-supervised Representation Learning. ECML/PKDD (1) 2024: 38-55
- [c97]Fabian Stermann, Ilias Chalkidis, Amirhossein Vahidi, Bernd Bischl, Mina Rezaei: Attention-Driven Dropout: A Simple Method to Improve Self-supervised Contrastive Sentence Embeddings. ECML/PKDD (1) 2024: 89-106
- [c96]Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek: On the Robustness of Global Feature Effect Explanations. ECML/PKDD (2) 2024: 125-142
- [c95]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction. WACV 2024: 7650-7659
- [c94]Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin N. Wright: CountARFactuals - Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests. xAI (3) 2024: 85-107
- [c93]Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann: mlr3summary: Concise and interpretable summaries for machine learning models. xAI (Late-breaking Work, Demos, Doctoral Consortium) 2024: 281-288
- [c92]Fiona Katharina Ewald, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, Gunnar König: A Guide to Feature Importance Methods for Scientific Inference. xAI (2) 2024: 440-464
- [i120]Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer: Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks? CoRR abs/2402.01484 (2024)
- [i119]Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio: Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration. CoRR abs/2403.04629 (2024)
- [i118]Philipp Kopper, David Rügamer, Raphael Sonabend, Bernd Bischl, Andreas Bender: Training Survival Models using Scoring Rules. CoRR abs/2403.13150 (2024)
- [i117]Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio: Effector: A Python package for regional explanations. CoRR abs/2404.02629 (2024)
- [i116]Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin N. Wright: CountARFactuals - Generating plausible model-agnostic counterfactual explanations with adversarial random forests. CoRR abs/2404.03506 (2024)
- [i115]Fiona Katharina Ewald, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, Gunnar König: A Guide to Feature Importance Methods for Scientific Inference. CoRR abs/2404.12862 (2024)
- [i114]Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann: mlr3summary: Concise and interpretable summaries for machine learning models. CoRR abs/2404.16899 (2024)
- [i113]Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl: Position: Why We Must Rethink Empirical Research in Machine Learning. CoRR abs/2405.02200 (2024)
- [i112]Thomas Nagler, Lennart Schneider, Bernd Bischl, Matthias Feurer: Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization. CoRR abs/2405.15393 (2024)
- [i111]Yang Zhang, Yawei Li, Xinpeng Wang, Qianli Shen, Barbara Plank, Bernd Bischl, Mina Rezaei, Kenji Kawaguchi: FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models. CoRR abs/2405.18218 (2024)
- [i110]Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas Müller, Frank Hutter, Matthias Feurer, Bernd Bischl: Position: A Call to Action for a Human-Centered AutoML Paradigm. CoRR abs/2406.03348 (2024)
- [i109]Lukas Burk, John Zobolas, Bernd Bischl, Andreas Bender, Marvin N. Wright, Raphael Sonabend: A Large-Scale Neutral Comparison Study of Survival Models on Low-Dimensional Data. CoRR abs/2406.04098 (2024)
- [i108]Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek: On the Robustness of Global Feature Effect Explanations. CoRR abs/2406.09069 (2024)
- [i107]Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek: Efficient and Accurate Explanation Estimation with Distribution Compression. CoRR abs/2406.18334 (2024)
- [i106]Hannah Schulz-Kümpel, Sebastian Fischer, Thomas Nagler, Anne-Laure Boulesteix, Bernd Bischl, Roman Hornung: Constructing Confidence Intervals for 'the' Generalization Error - a Comprehensive Benchmark Study. CoRR abs/2409.18836 (2024)
- 2023
- [j44]Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler: Auxiliary Cross-Modal Representation Learning With Triplet Loss Functions for Online Handwriting Recognition. IEEE Access 11: 94148-94172 (2023)
- [j43]Sai Rahul Kaminwar, Jann Goschenhofer, Janek Thomas, Ingo Thon, Bernd Bischl: Structured Verification of Machine Learning Models in Industrial Settings. Big Data 11(3): 181-198 (2023)
- [j42]Mina Rezaei, Farzin Soleymani, Bernd Bischl, Shekoofeh Azizi: Deep Bregman divergence for self-supervised representations learning. Comput. Vis. Image Underst. 235: 103801 (2023)
- [j41]Daniel Schalk, Bernd Bischl, David Rügamer: Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization. J. Comput. Graph. Stat. 32(2): 631-641 (2023)
- [j40]Daniel Schalk, Verena S. Hoffmann, Bernd Bischl, Ulrich Mansmann: dsBinVal: Conducting distributed ROC analysis using DataSHIELD. J. Open Source Softw. 8(83): 4545 (2023)
- [j39]David Rügamer, Chris Kolb, Cornelius Fritz, Florian Pfisterer, Philipp Kopper, Bernd Bischl, Ruolin Shen, Christina Bukas, Lisa Barros de Andrade e Sousa, Dominik Thalmeier, Philipp F. M. Baumann, Lucas Kook, Nadja Klein, Christian L. Müller: deepregression: A Flexible Neural Network Framework for Semi-Structured Deep Distributional Regression. J. Stat. Softw. 105(2) (2023)
- [j38]Florian Pfisterer, Siyi Wei, Sebastian J. Vollmer, Michel Lang, Bernd Bischl: Fairness Audits and Debiasing Using mlr3fairness. R J. 15(1): 234-253 (2023)
- [j37]Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, Martin Binder, Lennart Schneider, Janek Thomas, Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Jürgen Branke, Bernd Bischl: Multi-Objective Hyperparameter Optimization in Machine Learning - An Overview. ACM Trans. Evol. Learn. Optim. 3(4): 16:1-16:50 (2023)
- [j36]Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, Marius Lindauer: Hyperparameter optimization: Foundations, algorithms, best practices, and open challenges. WIREs Data Mining Knowl. Discov. 13(2) (2023)
- [c91]Daniel Saggau, Mina Rezaei, Bernd Bischl, Ilias Chalkidis: Efficient Document Embeddings via Self-Contrastive Bregman Divergence Learning. ACL (Findings) 2023: 12181-12190
- [c90]Emilio Dorigatti, Benjamin Schubert, Bernd Bischl, David Rügamer: Frequentist Uncertainty Quantification in Semi-Structured Neural Networks. AISTATS 2023: 1924-1941
- [c89]Sarah Segel, Helena Graf, Alexander Tornede, Bernd Bischl, Marius Lindauer: Symbolic Explanations for Hyperparameter Optimization. AutoML 2023: 2/1-22
- [c88]Lennart Oswald Purucker, Lennart Schneider, Marie Anastacio, Joeran Beel, Bernd Bischl, Holger H. Hoos: Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML. AutoML 2023: 10/1-34
- [c87]Amadeu Scheppach, Hüseyin Anil Gündüz, Emilio Dorigatti, Philipp C. Münch, Alice C. McHardy, Bernd Bischl, Mina Rezaei, Martin Binder: Neural Architecture Search for Genomic Sequence Data. CIBCB 2023: 1-10
- [c86]Raphael Patrick Prager, Konstantin Dietrich, Lennart Schneider, Lennart Schäpermeier, Bernd Bischl, Pascal Kerschke, Heike Trautmann, Olaf Mersmann: Neural Networks as Black-Box Benchmark Functions Optimized for Exploratory Landscape Features. FOGA 2023: 129-139
- [c85]Lennart Schneider, Bernd Bischl, Janek Thomas: Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models. GECCO 2023: 538-547
- [c84]Ivo Couckuyt, Sebastian Rojas-Gonzalez, Jürgen Branke, Bernd Bischl: Bayesian Optimization. GECCO Companion 2023: 895-912
- [c83]Matthias Aßenmacher, Lukas Rauch, Jann Goschenhofer, Andreas Stephan, Bernd Bischl, Benjamin Roth, Bernhard Sick: Towards Enhancing Deep Active Learning with Weak Supervision and Constrained Clustering. IAL@PKDD/ECML 2023: 65-73
- [c82]Tobias Pielok, Bernd Bischl, David Rügamer: Approximate Bayesian Inference with Stein Functional Variational Gradient Descent. ICLR 2023
- [c81]Hüseyin Anil Gündüz, Sheetal Giri, Martin Binder, Bernd Bischl, Mina Rezaei: Uncertainty Quantification for Deep Learning Models Predicting the Regulatory Activity of DNA Sequences. ICMLA 2023: 566-573
- [c80]Matthias Feurer, Katharina Eggensperger, Edward Bergman, Florian Pfisterer, Bernd Bischl, Frank Hutter: Mind the Gap: Measuring Generalization Performance Across Multiple Objectives. IDA 2023: 130-142
- [c79]Jann Goschenhofer, Bernd Bischl, Zsolt Kira: ConstraintMatch for Semi-constrained Clustering. IJCNN 2023: 1-10
- [c78]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Cascaded Latent Diffusion Models for High-Resolution Chest X-ray Synthesis. PAKDD (3) 2023: 180-191
- [c77]Lukas Rauch, Matthias Aßenmacher, Denis Huseljic, Moritz Wirth, Bernd Bischl, Bernhard Sick: ActiveGLAE: A Benchmark for Deep Active Learning with Transformers. ECML/PKDD (1) 2023: 55-74
- [c76]Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer: Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry. ECML/PKDD (1) 2023: 459-474
- [c75]Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann: Interpretable Regional Descriptors: Hyperbox-Based Local Explanations. ECML/PKDD (3) 2023: 479-495
- [c74]Lisa Wimmer, Yusuf Sale, Paul Hofman, Bernd Bischl, Eyke Hüllermeier: Quantifying aleatoric and epistemic uncertainty in machine learning: Are conditional entropy and mutual information appropriate measures? UAI 2023: 2282-2292
- [c73]Christoph Molnar, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl: Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. xAI (1) 2023: 456-479
- [i105]Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler: Representation Learning for Tablet and Paper Domain Adaptation in Favor of Online Handwriting Recognition. CoRR abs/2301.06293 (2023)
- [i104]Hilde J. P. Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor H. Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter: Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML. CoRR abs/2303.08485 (2023)
- [i103]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Cascaded Latent Diffusion Models for High-Resolution Chest X-ray Synthesis. CoRR abs/2303.11224 (2023)
- [i102]Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer: Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry. CoRR abs/2304.02902 (2023)
- [i101]Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio: counterfactuals: An R Package for Counterfactual Explanation Methods. CoRR abs/2304.06569 (2023)
- [i100]Felix Ott, Lucas Heublein, David Rügamer, Bernd Bischl, Christopher Mutschler: Fusing Structure from Motion and Simulation-Augmented Pose Regression from Optical Flow for Challenging Indoor Environments. CoRR abs/2304.07250 (2023)
- [i99]Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann: Interpretable Regional Descriptors: Hyperbox-Based Local Explanations. CoRR abs/2305.02780 (2023)
- [i98]Daniel Saggau, Mina Rezaei, Bernd Bischl, Ilias Chalkidis: Efficient Document Embeddings via Self-Contrastive Bregman Divergence Learning. CoRR abs/2305.16031 (2023)
- [i97]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction. CoRR abs/2305.16376 (2023)
- [i96]Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio: Decomposing Global Feature Effects Based on Feature Interactions. CoRR abs/2306.00541 (2023)
- [i95]Lukas Rauch, Matthias Aßenmacher, Denis Huseljic, Moritz Wirth, Bernd Bischl, Bernhard Sick: ActiveGLAE: A Benchmark for Deep Active Learning with Transformers. CoRR abs/2306.10087 (2023)
- [i94]Chris Kolb, Christian L. Müller, Bernd Bischl, David Rügamer: Smoothing the Edges: A General Framework for Smooth Optimization in Sparse Regularization using Hadamard Overparametrization. CoRR abs/2307.03571 (2023)
- [i93]Ibrahim Tolga Öztürk, Rostislav Nedelchev, Christian Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl, Matthias Aßenmacher: How Different Is Stereotypical Bias Across Languages? CoRR abs/2307.07331 (2023)
- [i92]Lennart Schneider, Bernd Bischl, Janek Thomas: Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models. CoRR abs/2307.08175 (2023)
- [i91]Lennart Purucker, Lennart Schneider, Marie Anastacio, Joeran Beel, Bernd Bischl, Holger H. Hoos: Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML. CoRR abs/2307.08364 (2023)
- [i90]Yawei Li, Yang Zhang, Kenji Kawaguchi, Ashkan Khakzar, Bernd Bischl, Mina Rezaei: A Dual-Perspective Approach to Evaluating Feature Attribution Methods. CoRR abs/2308.08949 (2023)
- [i89]Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei: Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning. CoRR abs/2308.14705 (2023)
- [i88]Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei: Probabilistic Self-supervised Learning via Scoring Rules Minimization. CoRR abs/2309.02048 (2023)
- [i87]Holger Löwe, Christian A. Scholbeck, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio: fmeffects: An R Package for Forward Marginal Effects. CoRR abs/2310.02008 (2023)
- [i86]Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip H. S. Torr, Ashkan Khakzar, Kenji Kawaguchi: AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments. CoRR abs/2310.06514 (2023)
- [i85]Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix: Evaluating machine learning models in non-standard settings: An overview and new findings. CoRR abs/2310.15108 (2023)
- [i84]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Unreading Race: Purging Protected Features from Chest X-ray Embeddings. CoRR abs/2311.01349 (2023)
- [i83]Jann Goschenhofer, Bernd Bischl, Zsolt Kira: ConstraintMatch for Semi-constrained Clustering. CoRR abs/2311.15395 (2023)
- [i82]Christian A. Scholbeck, Julia Moosbauer, Giuseppe Casalicchio, Hoshin Gupta, Bernd Bischl, Christian Heumann: Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis. CoRR abs/2312.13234 (2023)
- 2022
- [j35]Florian Pargent, Florian Pfisterer, Janek Thomas, Bernd Bischl: Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features. Comput. Stat. 37(5): 2671-2692 (2022)
- [j34]Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio: Grouped feature importance and combined features effect plot. Data Min. Knowl. Discov. 36(4): 1401-1450 (2022)
- [j33]Felix Ott, David Rügamer, Lucas Heublein, Tim Hamann, Jens Barth, Bernd Bischl, Christopher Mutschler: Benchmarking online sequence-to-sequence and character-based handwriting recognition from IMU-enhanced pens. Int. J. Document Anal. Recognit. 25(4): 385-414 (2022)
- [j32]Julia Moosbauer, Martin Binder, Lennart Schneider, Florian Pfisterer, Marc Becker, Michel Lang, Lars Kotthoff, Bernd Bischl: Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers. IEEE Trans. Evol. Comput. 26(6): 1336-1350 (2022)
- [c72]Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio: REPID: Regional Effect Plots with implicit Interaction Detection. AISTATS 2022: 10209-10233
- [c71]Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, Bernd Bischl: YAHPO Gym - An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization. AutoML 2022: 3/1-39
- [c70]Lennart Schneider, Florian Pfisterer, Paul Kent, Jürgen Branke, Bernd Bischl, Janek Thomas: Tackling Neural Architecture Search With Quality Diversity Optimization. AutoML 2022: 9/1-30
- [c69]Susanne Dandl, Florian Pfisterer, Bernd Bischl: Multi-objective counterfactual fairness. GECCO Companion 2022: 328-331
- [c68]Lennart Schneider, Florian Pfisterer, Janek Thomas, Bernd Bischl: A collection of quality diversity optimization problems derived from hyperparameter optimization of machine learning models. GECCO Companion 2022: 2136-2142
- [c67]Mina Rezaei, Emilio Dorigatti, David Rügamer, Bernd Bischl: Joint Debiased Representation Learning and Imbalanced Data Clustering. ICDM (Workshops) 2022: 55-62
- [c66]Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler: Representation Learning for Tablet and Paper Domain Adaptation in Favor of Online Handwriting Recognition. ICPR Workshops (1) 2022: 373-383
- [c65]Andreas Klaß, Sven M. Lorenz, Martin W. Lauer-Schmaltz, David Rügamer, Bernd Bischl, Christopher Mutschler, Felix Ott: Uncertainty-aware Evaluation of Time-series Classification for Online Handwriting Recognition with Domain Shift. STRL@IJCAI 2022
- [c64]Mina Rezaei, Janne J. Näppi, Bernd Bischl, Hiroyuki Yoshida: Bayesian uncertainty estimation for detection of long-tail and unseen conditions in abdominal images. Computer-Aided Diagnosis 2022
- [c63]Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer: Implicit Embeddings via GAN Inversion for High Resolution Chest Radiographs. MAD@MICCAI 2022: 22-32
- [c62]Farzin Soleymani, Mohammad Eslami, Tobias Elze, Bernd Bischl, Mina Rezaei: Deep variational clustering framework for self-labeling large-scale medical images. Image Processing 2022
- [c61]Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler: Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift. ACM Multimedia 2022: 5934-5943
- [c60]Mehmet Ozgur Turkoglu, Alexander Becker, Hüseyin Anil Gündüz, Mina Rezaei, Bernd Bischl, Rodrigo Caye Daudt, Stefano D'Aronco, Jan D. Wegner, Konrad Schindler: FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation. NeurIPS 2022
- [c59]Philipp Kopper, Simon Wiegrebe, Bernd Bischl, Andreas Bender, David Rügamer: DeepPAMM: Deep Piecewise Exponential Additive Mixed Models for Complex Hazard Structures in Survival Analysis. PAKDD (2) 2022: 249-261
- [c58]David Rügamer, Andreas Bender, Simon Wiegrebe, Daniel Racek, Bernd Bischl, Christian L. Müller, Clemens Stachl: Factorized Structured Regression for Large-Scale Varying Coefficient Models. ECML/PKDD (5) 2022: 20-35
- [c57]Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer: Efficient Automated Deep Learning for Time Series Forecasting. ECML/PKDD (3) 2022: 664-680
- [c56]Lennart Schneider, Lennart Schäpermeier, Raphael Patrick Prager, Bernd Bischl, Heike Trautmann, Pascal Kerschke: HPO × ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis. PPSN (1) 2022: 575-589
- [c55]Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl: Developing Open Source Educational Resources for Machine Learning and Data Science. Teaching ML 2022: 1-6
- [c54]Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler: Joint Classification and Trajectory Regression of Online Handwriting using a Multi-Task Learning Approach. WACV 2022: 1244-1254
- [i81]Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann: Marginal Effects for Non-Linear Prediction Functions. CoRR abs/2201.08837 (2022)
- [i80]Emilio Dorigatti, Jann Goschenhofer, Benjamin Schubert, Mina Rezaei, Bernd Bischl: Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection. CoRR abs/2201.13192 (2022)
- [i79]Felix Ott, David Rügamer, Lucas Heublein, Tim Hamann, Jens Barth, Bernd Bischl, Christopher Mutschler: Benchmarking Online Sequence-to-Sequence and Character-based Handwriting Recognition from IMU-Enhanced Pens. CoRR abs/2202.07036 (2022)
- [i78]