Search dblp: full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by a pipe symbol (|); e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
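The four operators above can be illustrated with a small matcher. This is an illustrative sketch of the documented query semantics, not dblp's actual search implementation; the function name `matches` is hypothetical.

```python
import re

def matches(query: str, text: str) -> bool:
    """Check whether `text` satisfies a dblp-style search query:
    case-insensitive prefix terms, `$` for exact word match,
    spaces for AND, and `|` for OR within a term group."""
    words = re.findall(r"\w+", text.lower())
    for group in query.lower().split():      # space-separated groups: AND
        alternatives = group.split("|")      # pipe-separated terms: OR
        ok = False
        for alt in alternatives:
            if alt.endswith("$"):            # exact word search
                ok = any(w == alt[:-1] for w in words)
            else:                            # prefix search (default)
                ok = any(w.startswith(alt) for w in words)
            if ok:
                break
        if not ok:
            return False
    return True

# "sig" matches both "SIGIR" and "signal"; "graph$" rejects "graphics"
assert matches("sig", "SIGIR proceedings")
assert matches("sig", "signal processing")
assert matches("graph$", "graph theory")
assert not matches("graph$", "computer graphics")
assert matches("codd model", "the relational model by Codd")
assert matches("graph|network", "neural network design")
```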
Author search results
no matches
Venue search results
no matches
Refine list
- refine by author, venue, type, access, or year: temporarily not available
Publication search results
found 32 matches
- 2024
  - Mirco Theile: Modeling Planning, Control, and Scheduling of Cyber-Physical Systems for Reinforcement Learning. Technical University of Munich, Germany, 2024
  - Xue Yang, Enda Howley, Michael Schukat: ADT: Time series anomaly detection for cyber-physical systems via deep reinforcement learning. Comput. Secur. 141: 103825 (2024)
  - Fatima Ezzahra Achamrah, Ali Attajer: Multi-objective reinforcement learning-based framework for solving selective maintenance problems in reconfigurable cyber-physical manufacturing systems. Int. J. Prod. Res. 62(10): 3460-3482 (2024)
  - Xinge Li, Xiaoya Hu, Tao Jiang: Dual-Reinforcement-Learning-Based Attack Path Prediction for 5G Industrial Cyber-Physical Systems. IEEE Internet Things J. 11(1): 50-58 (2024)
  - Shixiong Jiang, Mengyu Liu, Fanxin Kong: Vulnerability Analysis for Safe Reinforcement Learning in Cyber-Physical Systems. ICCPS 2024: 77-86
  - Shixiong Jiang, Mengyu Liu, Fanxin Kong: Demo: Vulnerability Analysis for STL-Guided Safe Reinforcement Learning in Cyber-Physical Systems. RTAS 2024: 400-401
- 2023
  - Zhaojun Hao, Francesco Di Maio, Enrico Zio: A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply. Reliab. Eng. Syst. Saf. 235: 109231 (2023)
  - Yan Yu, Wen Yang, Wenjie Ding, Jiayu Zhou: Reinforcement Learning Solution for Cyber-Physical Systems Security Against Replay Attacks. IEEE Trans. Inf. Forensics Secur. 18: 2583-2595 (2023)
  - Alberto Robles Enciso, Ricardo Robles-Enciso, Antonio F. Skarmeta: Multi-agent Reinforcement Learning-Based Energy Orchestrator for Cyber-Physical Systems. ALGOCLOUD 2023: 100-114
  - Elias Modrakowski, Niklas Braun, Mehrnoush Hajnorouzi, Andreas Eich, Narges Javaheri, Richard Doornbos, Sebastian Moritz, Jan-Willem Bikker, Rutger van Beek: Architecture for Digital Twin-Based Reinforcement Learning Optimization of Cyber-Physical Systems. ECSA (Tracks, Workshops and Doctoral Symposium) 2023: 257-271
  - Zhengcheng Dong, Mian Tang, Meng Tian: Allocating defense resources for spatial cyber-physical power systems based on deep reinforcement learning. ICPS 2023: 1-6
  - Xin Qin, Nikos Aréchiga, Jyotirmoy Deshmukh, Andrew Best: Robust Testing for Cyber-Physical Systems using Reinforcement Learning. MEMOCODE 2023: 36-46
  - Hyunsoo Lee, Soohyun Park, Won Joon Yun, Soyi Jung, Joongheon Kim: Situation-Aware Deep Reinforcement Learning for Autonomous Nonlinear Mobility Control in Cyber-Physical Loitering Munition Systems. CoRR abs/2301.00124 (2023)
  - (Withdrawn) Deep Reinforcement Learning for Online Error Detection in Cyber-Physical Systems. CoRR abs/2302.01567 (2023)
- 2022
  - Minrui Xu, Jialiang Peng, Brij B. Gupta, Jiawen Kang, Zehui Xiong, Zhenni Li, Ahmed A. Abd El-Latif: Multiagent Federated Reinforcement Learning for Secure Incentive Mechanism in Intelligent Cyber-Physical Systems. IEEE Internet Things J. 9(22): 22095-22108 (2022)
  - J. Stanly Jayaprakash, M. Jasmine Pemeena Priyadarsini, Parameshachari Bidare Divakarachari, Hamid Reza Karimi, Sasikumar Gurumoorthy: Deep Q-Network with Reinforcement Learning for Fault Detection in Cyber-Physical Systems. J. Circuits Syst. Comput. 31(9): 2250158:1-2250158:34 (2022)
  - Timothy Rupprecht, Yanzhi Wang: A survey for deep reinforcement learning in markovian cyber-physical systems: Common problems and solutions. Neural Networks 153: 13-36 (2022)
  - Ryan Silva, Cameron Hickert, Nicolas R. Sarfaraz, Jeff Brush, Josh Silbermann, Tamim Sookoor: AlphaSOC: Reinforcement Learning-based Cybersecurity Automation for Cyber-Physical Systems. ICCPS 2022: 290-291
  - Eunho Cho, Gwangoo Yeo, Eunkyoung Jee, Doo-Hwan Bae: Anomaly-Aware Adaptation Approach for Self-Adaptive Cyber-Physical System of Systems Using Reinforcement Learning. SoSE 2022: 7-12
  - Mohamad Chehadeh, Igor Boiko, Yahya H. Zweiri: The Role of Time Delay in Sim2real Transfer of Reinforcement Learning for Cyber-Physical Systems. CoRR abs/2209.15216 (2022)
- 2021
  - Shikang Xu, Israel Koren, C. Mani Krishna: Adaptive workload adjustment for cyber-physical systems using deep reinforcement learning. Sustain. Comput. Informatics Syst. 30: 100525 (2021)
  - Shaohua Zhang, Shuang Liu, Jun Sun, Yuqi Chen, Wenzhi Huang, Jinyi Liu, Jian Liu, Jianye Hao: FIGCPS: Effective Failure-inducing Input Generation for Cyber-Physical Systems with Deep Reinforcement Learning. ASE 2021: 555-567
- 2020
  - Gwangpyo Yoo, Minjong Yoo, Ikjun Yeom, Honguk Woo: rocorl: Transferable Reinforcement Learning-Based Robust Control for Cyber-Physical Systems With Limited Data Updates. IEEE Access 8: 225370-225383 (2020)
  - Alex S. Leong, Arunselvan Ramaswamy, Daniel E. Quevedo, Holger Karl, Ling Shi: Deep reinforcement learning for wireless sensor scheduling in cyber-physical systems. Autom. 113: 108759 (2020)
  - Linfei Yin, Shengyuan Li, Hui Liu: Lazy reinforcement learning for real-time generation control of parallel cyber-physical-social energy systems. Eng. Appl. Artif. Intell. 88 (2020)
  - Jupiter Bakakeu, Dominik Kißkalt, Jörg Franke, Shirin Baer, Hans-Henning Klos, Jörn Peschke: Multi-Agent Reinforcement Learning for the Energy Optimization of Cyber-Physical Production Systems. CCECE 2020: 1-8
  - Arnab Bhattacharya, Thiagarajan Ramachandran, Sandeep Banik, Chase P. Dowling, Shaunak D. Bopardikar: Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning. ISI 2020: 1-6
  - Joseph Khoury, Mohamed Nassar: A Hybrid Game Theory and Reinforcement Learning Approach for Cyber-Physical Systems Security. NOMS 2020: 1-9
  - Arnab Bhattacharya, Thiagarajan Ramachandran, Sandeep Banik, Chase P. Dowling, Shaunak D. Bopardikar: Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning. CoRR abs/2011.04635 (2020)
- 2019
  - Xing Liu, Hansong Xu, Weixian Liao, Wei Yu: Reinforcement Learning for Cyber-Physical Systems. ICII 2019: 318-327
(2 more matches not shown)
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the API of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
OpenAlex data
Load additional information about publications from openalex.org.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
retrieved on 2024-11-06 04:44 CET from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint