Maintained by Difan Deng and Marius Lindauer.
The following list considers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity across different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
2024
Liao, Yugang
Early-quit evolutionary search of hybrid channel attention networks for image classification Proceedings Article
In: Pachori, Ram Bilas; Chen, Lei (Ed.): International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2024), pp. 131801B, International Society for Optics and Photonics (SPIE), 2024.
@inproceedings{10.1117/12.3033783,
title = {Early-quit evolutionary search of hybrid channel attention networks for image classification},
author = {Yugang Liao},
editor = {Ram Bilas Pachori and Lei Chen},
url = {https://doi.org/10.1117/12.3033783},
doi = {10.1117/12.3033783},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2024)},
volume = {13180},
pages = {131801B},
publisher = {SPIE},
organization = {International Society for Optics and Photonics},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zito, Francesco; Cutello, Vincenzo; Pavone, Mario
A General-Purpose Neural Architecture Search Algorithm for Building Deep Neural Networks Proceedings Article
In: Sevaux, Marc; Olteanu, Alexandru-Liviu; Pardo, Eduardo G.; Sifaleras, Angelo; Makboul, Salma (Ed.): Metaheuristics, pp. 126–141, Springer Nature Switzerland, Cham, 2024, ISBN: 978-3-031-62922-8.
@inproceedings{10.1007/978-3-031-62922-8_9,
title = {A General-Purpose Neural Architecture Search Algorithm for Building Deep Neural Networks},
author = {Francesco Zito and Vincenzo Cutello and Mario Pavone},
editor = {Marc Sevaux and Alexandru-Liviu Olteanu and Eduardo G. Pardo and Angelo Sifaleras and Salma Makboul},
url = {https://link.springer.com/chapter/10.1007/978-3-031-62922-8_9},
isbn = {978-3-031-62922-8},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {Metaheuristics},
pages = {126–141},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {With the increasing availability of data and the development of powerful algorithms, deep neural networks have become an essential tool for all sectors. However, it can be challenging to automate the process of building and tuning them, due to the rapid growth of data and their complexity. The demand for handling large amounts of data has led to an increasing number of hidden layers and hyperparameters. A framework or methodology to design the architecture of deep neural networks will be crucial in the future, as it could significantly speed up the process of using deep learning models. We present here a first attempt to create an algorithm that combines aspects of Neural Architecture Search and Hyperparameter Optimization to build and optimize a neural network architecture. The particularity of our algorithm is that it is able to learn how to link neural layers of different types to create increasingly performant neural network architectures. We conducted experiments on four different tasks, including regression, binary and multi-classification, and forecasting, to compare our algorithm with common machine learning models.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Su, Yuanxin; Ang, Li-minn; Seng, Kah Phooi; Smith, Jeremy
Deep Learning and Neural Architecture Search for Optimizing Binary Neural Network Image Super Resolution Journal Article
In: Biomimetics, vol. 9, no. 6, 2024, ISSN: 2313-7673.
@article{biomimetics9060369,
title = {Deep Learning and Neural Architecture Search for Optimizing Binary Neural Network Image Super Resolution},
author = {Yuanxin Su and Li-minn Ang and Kah Phooi Seng and Jeremy Smith},
url = {https://www.mdpi.com/2313-7673/9/6/369},
doi = {10.3390/biomimetics9060369},
issn = {2313-7673},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Biomimetics},
volume = {9},
number = {6},
abstract = {The evolution of super-resolution (SR) technology has seen significant advancements through the adoption of deep learning methods. However, the deployment of such models by resource-constrained devices necessitates models that not only perform efficiently, but also conserve computational resources. Binary neural networks (BNNs) offer a promising solution by minimizing the data precision to binary levels, thus reducing the computational complexity and memory requirements. However, for BNNs, an effective architecture is essential due to their inherent limitations in representing information. Designing such architectures traditionally requires extensive computational resources and time. With the advancement in neural architecture search (NAS), differentiable NAS has emerged as an attractive solution for efficiently crafting network structures. In this paper, we introduce a novel and efficient binary network search method tailored for image super-resolution tasks. We adapt the search space specifically for super resolution to ensure it is optimally suited for the requirements of such tasks. Furthermore, we incorporate Libra Parameter Binarization (Libra-PB) to maximize information retention during forward propagation. Our experimental results demonstrate that the network structures generated by our method require only a third of the parameters, compared to conventional methods, and yet deliver comparable performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mu, Caihong; Yu, Haikun; Zhang, Keyang; Tian, Qiang; Liu, Yi
AutoMaster: Differentiable Graph Neural Network Architecture Search for Collaborative Filtering Recommendation Proceedings Article
In: Stefanidis, Kostas; Systä, Kari; Matera, Maristella; Heil, Sebastian; Kondylakis, Haridimos; Quintarelli, Elisa (Ed.): Web Engineering, pp. 82–98, Springer Nature Switzerland, Cham, 2024, ISBN: 978-3-031-62362-2.
@inproceedings{10.1007/978-3-031-62362-2_6,
title = {AutoMaster: Differentiable Graph Neural Network Architecture Search for Collaborative Filtering Recommendation},
author = {Caihong Mu and Haikun Yu and Keyang Zhang and Qiang Tian and Yi Liu},
editor = {Kostas Stefanidis and Kari Systä and Maristella Matera and Sebastian Heil and Haridimos Kondylakis and Elisa Quintarelli},
isbn = {978-3-031-62362-2},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {Web Engineering},
pages = {82–98},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Graph Neural Networks (GNNs) have been widely applied in Collaborative Filtering (CF) and have demonstrated powerful capabilities in recommender systems (RSs). In recent years, there has been a heated debate on whether the non-linear propagation mechanism in Graph Convolutional Networks (GCNs) is suitable for CF tasks, and the performance of linear propagation is believed to be superior to non-linear propagation mainly in the field of RSs. Therefore, it is necessary to reexamine this issue: (1) whether linear propagation generally outperforms non-linear propagation, and (2) whether a combination of linear and non-linear propagation can be applied to CF tasks to achieve better accuracy. Furthermore, most existing studies design a single model architecture tailored to specific data or scenarios, and there remains a challenging and worthwhile problem to obtain the best-performing model in new recommendation data. To address the above issues, we propose a model called AutoMaster, which implements differentiable graph neural network architecture search for CF recommendation and automatically designs GNN architectures specific to different datasets. We design a compact and representative search space that includes various linear and non-linear graph convolutional layers, and employ a differentiable search strategy to search for the best-performing hybrid architecture in different recommendation datasets. Experimental results on five real-world datasets demonstrate that the GNN automatically achieved by the proposed AutoMaster contains both linear and nonlinear propagation, and outperforms several advanced GNN based CF models designed by the experienced human designers.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Khouy, Mohammed; Jabrane, Younes; Ameur, Mustapha
Optimal U-Net Using Grey Wolf Algorithm for Retinal Vessel Segmentation Proceedings Article
In: 2024 4th International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), pp. 1-6, 2024.
@inproceedings{10549329,
title = {Optimal U-Net Using Grey Wolf Algorithm for Retinal Vessel Segmentation},
author = {Mohammed Khouy and Younes Jabrane and Mustapha Ameur},
url = {https://ieeexplore.ieee.org/abstract/document/10549329},
doi = {10.1109/IRASET60544.2024.10549329},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {2024 4th International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET)},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jiang, Zhichao; Wang, Hongsong; Teng, Xi; Li, Baopu
Robust 3D Face Alignment with Multi-Path Neural Architecture Search Technical Report
2024.
@techreport{jiang2024robust3dfacealignment,
title = {Robust 3D Face Alignment with Multi-Path Neural Architecture Search},
author = {Zhichao Jiang and Hongsong Wang and Xi Teng and Baopu Li},
url = {https://arxiv.org/abs/2406.07873},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kundu, Triparna; S, Abirami
Enhancing Deep Neural Network Architecture in Spatio-Temporal Forecasting Through Neural Architecture Search Proceedings Article
In: 2024 International Conference on Recent Advances in Electrical, Electronics, Ubiquitous Communication, and Computational Intelligence (RAEEUCCI), pp. 1-5, 2024.
@inproceedings{10547787,
title = {Enhancing Deep Neural Network Architecture in Spatio-Temporal Forecasting Through Neural Architecture Search},
author = {Triparna Kundu and Abirami S},
url = {https://ieeexplore.ieee.org/abstract/document/10547787},
doi = {10.1109/RAEEUCCI61380.2024.10547787},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {2024 International Conference on Recent Advances in Electrical, Electronics, Ubiquitous Communication, and Computational Intelligence (RAEEUCCI)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Lv, Jindi; Sun, Yanan; Ye, Qing; Feng, Wentao; Lv, Jiancheng
A multiscale neural architecture search framework for multimodal fusion Journal Article
In: Information Sciences, vol. 679, pp. 121005, 2024, ISSN: 0020-0255.
@article{LV2024121005,
title = {A multiscale neural architecture search framework for multimodal fusion},
author = {Jindi Lv and Yanan Sun and Qing Ye and Wentao Feng and Jiancheng Lv},
url = {https://www.sciencedirect.com/science/article/pii/S0020025524009198},
doi = {10.1016/j.ins.2024.121005},
issn = {0020-0255},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Information Sciences},
volume = {679},
pages = {121005},
abstract = {Multimodal fusion, a machine learning technique, significantly enhances decision-making by leveraging complementary information extracted from different data modalities. The success of multimodal fusion relies heavily on the design of the fusion scheme. However, this process traditionally depends on manual expertise and exhaustive trials. To tackle this challenge, researchers have undertaken studies on DARTS-based Neural Architecture Search (NAS) variants to automate the search of fusion schemes. In this paper, we present theoretical and empirical evidence that highlights the presence of catastrophic search bias in DARTS-based multimodal fusion methods. This bias traps the search into a deceptive optimal childnet, rendering the entire search process ineffective. To circumvent this phenomenon, we introduce a novel NAS framework for multimodal fusion, featuring a robust search strategy and a meticulously designed multi-scale fusion search space. Significantly, the proposed framework is capable of capturing modality-specific information across multiple scales while achieving an automatic balance between intra-modal and inter-modal information. We conduct extensive experiments on three commonly used multimodal classification tasks from different domains and compare the proposed framework against state-of-the-art approaches. The experimental results demonstrate the superior robustness and high efficiency of the proposed framework.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Amin, Md Hasibul; Mohammadi, Mohammadreza; Zand, Ramtin
Multi-Objective Neural Architecture Search for In-Memory Computing Technical Report
2024.
@techreport{amin2024multiobjectiveneuralarchitecturesearchb,
title = {Multi-Objective Neural Architecture Search for In-Memory Computing},
author = {Md Hasibul Amin and Mohammadreza Mohammadi and Ramtin Zand},
url = {https://arxiv.org/abs/2406.06746},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wang, Dingrong; Sapkota, Hitesh; Tao, Zhiqiang; Yu, Qi
Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness Technical Report
2024.
@techreport{wang2024reinforcedcompressiveneuralarchitecture,
title = {Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness},
author = {Dingrong Wang and Hitesh Sapkota and Zhiqiang Tao and Qi Yu},
url = {https://arxiv.org/abs/2406.06792},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ranasinghe, Piumini; Paranayapa, Thivindu; Ranmal, Dakshina; Meedeniya, Dulani
Hardware-aware Neural Architecture Search for Sound Classification in Constrained Environments Proceedings Article
In: 2024 International Research Conference on Smart Computing and Systems Engineering (SCSE), pp. 1-6, 2024.
@inproceedings{10550556,
title = {Hardware-aware Neural Architecture Search for Sound Classification in Constrained Environments},
author = {Piumini Ranasinghe and Thivindu Paranayapa and Dakshina Ranmal and Dulani Meedeniya},
url = {https://ieeexplore.ieee.org/abstract/document/10550556},
doi = {10.1109/SCSE61872.2024.10550556},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {2024 International Research Conference on Smart Computing and Systems Engineering (SCSE)},
volume = {7},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhong, Rui; Cao, Yang; Yu, Jun; Munetomo, Masaharu
Large Language Model Assisted Adversarial Robustness Neural Architecture Search Technical Report
2024.
@techreport{zhong2024largelanguagemodelassisted,
title = {Large Language Model Assisted Adversarial Robustness Neural Architecture Search},
author = {Rui Zhong and Yang Cao and Jun Yu and Masaharu Munetomo},
url = {https://arxiv.org/abs/2406.05433},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Fabing; Zhai, Yuanhao; Gao, Mingyu
Seesaw: Compensating for Nonlinear Reduction with Linear Computations in Private Inference Collection
2024.
@collection{li2024compensating,
title = {Seesaw: Compensating for Nonlinear Reduction with Linear Computations in Private Inference},
author = {Fabing Li and Yuanhao Zhai and Mingyu Gao},
url = {http://people.iiis.tsinghua.edu.cn/~gaomy/pubs/seesaw.icml24.pdf},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {ICML 2024},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Trzciński, Maciej; Łukasik, Szymon; Gandomi, Amir H.
Optimizing the Structures of Transformer Neural Networks Using Parallel Simulated Annealing Journal Article
In: Journal of Artificial Intelligence and Soft Computing Research, vol. 14, no. 3, pp. 267–282, 2024.
@article{TrzcińskiŁukasikGandomi+2024+267+282,
title = {Optimizing the Structures of Transformer Neural Networks Using Parallel Simulated Annealing},
author = {Maciej Trzciński and Szymon Łukasik and Amir H. Gandomi},
url = {https://doi.org/10.2478/jaiscr-2024-0015},
doi = {10.2478/jaiscr-2024-0015},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Journal of Artificial Intelligence and Soft Computing Research},
volume = {14},
number = {3},
pages = {267–282},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Tang, Chenxia
Heterogeneous Learning Rate Scheduling for Neural Architecture Search on Long-Tailed Datasets Technical Report
2024.
@techreport{tang2024heterogeneouslearningratescheduling,
title = {Heterogeneous Learning Rate Scheduling for Neural Architecture Search on Long-Tailed Datasets},
author = {Chenxia Tang},
url = {https://arxiv.org/abs/2406.07028},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Amin, Md Hasibul; Mohammadi, Mohammadreza; Zand, Ramtin
Multi-Objective Neural Architecture Search for In-Memory Computing Technical Report
2024.
@techreport{amin2024multiobjectiveneuralarchitecturesearch,
title = {Multi-Objective Neural Architecture Search for In-Memory Computing},
author = {Md Hasibul Amin and Mohammadreza Mohammadi and Ramtin Zand},
url = {https://arxiv.org/abs/2406.06746},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
C, Prashanth H; Rao, Madhav
Performance Analysis of OFA-NAS ResNet Topologies Across Diverse Hardware Compute Units Proceedings Article
In: Proceedings of the Great Lakes Symposium on VLSI 2024, pp. 604–607, Association for Computing Machinery, Clearwater, FL, USA, 2024, ISBN: 9798400706059.
@inproceedings{10.1145/3649476.3658811,
title = {Performance Analysis of OFA-NAS ResNet Topologies Across Diverse Hardware Compute Units},
author = {Prashanth H C and Madhav Rao},
url = {https://doi.org/10.1145/3649476.3658811},
doi = {10.1145/3649476.3658811},
isbn = {9798400706059},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {Proceedings of the Great Lakes Symposium on VLSI 2024},
pages = {604–607},
publisher = {Association for Computing Machinery},
address = {Clearwater, FL, USA},
series = {GLSVLSI '24},
abstract = {Network architecture search (NAS) is a tedious process, and hence a different approach is practiced: training a large over-parameterized network, followed by a progressive shrinking algorithm targeting the most efficient models for hardware platforms, which is referred to as the Once-For-All (OFA) network. This paper focuses on utilizing OFA-defined NAS runs on ResNet topologies for a wide range of hardware platforms, from workstation CPUs and GPUs to mobile CPUs, mobile GPUs, VPUs, and a DPU run on a Xilinx FPGA. The OFA-extracted model run on the VPU unit offers speed improvements of 86% and 46% over the mobile CPU A76 and mobile GPU 630 systems, respectively. A latency improvement of 100% was achieved for 3-threaded execution over single-threaded execution on the DPU unit. The roofline model of OFA-defined NAS-extracted ResNet topologies indicated that all models are compute-bound when operated on a Threadripper 960X CPU workstation, with 16-threaded execution offering maximum performance for a batch size of 16. For the GPU workstation, a throughput improvement of 66.66% to 100% was achieved for the OFA NAS-generated models when configured to run on 3 threads over a single thread. Roofline performance analysis for the OFA NAS-extracted model running on a Xilinx FPGA ZCU102 showed that most of the models are compute-bound for single- and multi-threaded execution.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Martyniuk, Darya; Jung, Johannes; Paschke, Adrian
Quantum Architecture Search: A Survey Technical Report
2024.
@techreport{martyniuk2024quantumarchitecturesearchsurvey,
title = {Quantum Architecture Search: A Survey},
author = {Darya Martyniuk and Johannes Jung and Adrian Paschke},
url = {https://arxiv.org/abs/2406.06210},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Nath, Utkarsh; Wang, Yancheng; Yang, Yingzhen
Neural Architecture Search Finds Robust Models by Knowledge Distillation Proceedings Article
In: The 40th Conference on Uncertainty in Artificial Intelligence, 2024.
@inproceedings{nath2024neural,
title = {Neural Architecture Search Finds Robust Models by Knowledge Distillation},
author = {Utkarsh Nath and Yancheng Wang and Yingzhen Yang},
url = {https://openreview.net/forum?id=S0nrdTCNEn},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {The 40th Conference on Uncertainty in Artificial Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Deng, Difan; Lindauer, Marius
Optimizing Time Series Forecasting Architectures: A Hierarchical Neural Architecture Search Approach Technical Report
2024.
@techreport{deng2024optimizingtimeseriesforecasting,
title = {Optimizing Time Series Forecasting Architectures: A Hierarchical Neural Architecture Search Approach},
author = {Difan Deng and Marius Lindauer},
url = {https://arxiv.org/abs/2406.05088},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Jeevidha, S.; Saraswathi, S.
Parametric NAS: Revolutionizing Neural Architecture Search for Lung Colon Cancer Classification Proceedings Article
In: 2024 10th International Conference on Communication and Signal Processing (ICCSP), pp. 1739-1744, 2024.
@inproceedings{10544160,
title = {Parametric NAS: Revolutionizing Neural Architecture Search for Lung Colon Cancer Classification},
author = {S. Jeevidha and S. Saraswathi},
url = {https://ieeexplore.ieee.org/abstract/document/10544160},
doi = {10.1109/ICCSP60870.2024.10544160},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {2024 10th International Conference on Communication and Signal Processing (ICCSP)},
pages = {1739-1744},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Liu, Yang; Zhang, Peng; Gao, Yang; Zhou, Chuan; Li, Zhao; Chen, Hongyang
Combinatorial Optimization with Automated Graph Neural Networks Technical Report
2024.
@techreport{liu2024combinatorialoptimizationautomatedgraph,
title = {Combinatorial Optimization with Automated Graph Neural Networks},
author = {Yang Liu and Peng Zhang and Yang Gao and Chuan Zhou and Zhao Li and Hongyang Chen},
url = {https://arxiv.org/abs/2406.02872},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Fu, Hao; Zhang, Tunhou; Li, Hai; Chen, Yiran
Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS Technical Report
2024.
@techreport{fu2024denseconnectivitybenefitoutlier,
title = {Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS},
author = {Hao Fu and Tunhou Zhang and Hai Li and Yiran Chen},
url = {https://arxiv.org/abs/2406.01975},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Liu, Linjing; Xiong, Ying; Zheng, Zetian; Huang, Lei; Song, Jiangning; Lin, Qiuzhen; Tang, Buzhou; Wong, Ka-Chun
AutoCancer as an automated multimodal framework for early cancer detection Journal Article
In: iScience, vol. 27, no. 7, pp. 110183, 2024, ISSN: 2589-0042.
@article{LIU2024110183,
title = {AutoCancer as an automated multimodal framework for early cancer detection},
author = {Linjing Liu and Ying Xiong and Zetian Zheng and Lei Huang and Jiangning Song and Qiuzhen Lin and Buzhou Tang and Ka-Chun Wong},
url = {https://www.sciencedirect.com/science/article/pii/S2589004224014081},
doi = {10.1016/j.isci.2024.110183},
issn = {2589-0042},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {iScience},
volume = {27},
number = {7},
pages = {110183},
abstract = {Current studies in early cancer detection based on liquid biopsy data often rely on off-the-shelf models and face challenges with heterogeneous data, as well as manually designed data preprocessing pipelines with different parameter settings. To address those challenges, we present AutoCancer, an automated, multimodal, and interpretable transformer-based framework. This framework integrates feature selection, neural architecture search, and hyperparameter optimization into a unified optimization problem with Bayesian optimization. Comprehensive experiments demonstrate that AutoCancer achieves accurate performance in specific cancer types and pan-cancer analysis, outperforming existing methods across three cohorts. We further demonstrated the interpretability of AutoCancer by identifying key gene mutations associated with non-small cell lung cancer to pinpoint crucial factors at different stages and subtypes. The robustness of AutoCancer, coupled with its strong interpretability, underscores its potential for clinical applications in early cancer detection.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ji, Han; Feng, Yuqi; Sun, Yanan
CAP: A Context-Aware Neural Predictor for NAS Technical Report
2024.
@techreport{ji2024capcontextawareneuralpredictor,
title = {CAP: A Context-Aware Neural Predictor for NAS},
author = {Han Ji and Yuqi Feng and Yanan Sun},
url = {https://arxiv.org/abs/2406.02056},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Orucu, Adam; Moradi, Farnaz; Ebrahimi, Masoumeh; Johnsson, Andreas
Towards Neural Architecture Search for Transfer Learning in 6G Networks Technical Report
2024.
@techreport{orucu2024neuralarchitecturesearchtransfer,
title = {Towards Neural Architecture Search for Transfer Learning in 6G Networks},
author = {Adam Orucu and Farnaz Moradi and Masoumeh Ebrahimi and Andreas Johnsson},
url = {https://arxiv.org/abs/2406.02333},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Dewi, Christine; Thiruvady, Dhananjay; Zaidi, Nayyar
Fruit Classification System with Deep Learning and Neural Architecture Search Technical Report
2024.
@techreport{dewi2024fruitclassificationdeeplearning,
title = {Fruit Classification System with Deep Learning and Neural Architecture Search},
author = {Christine Dewi and Dhananjay Thiruvady and Nayyar Zaidi},
url = {https://arxiv.org/abs/2406.01869},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xue, Yu; Zha, Jiajie; Wahib, Mohamed; Ouyang, Tinghui; Wang, Xiao
Neural architecture search via similarity adaptive guidance Journal Article
In: Applied Soft Computing, vol. 162, pp. 111821, 2024, ISSN: 1568-4946.
@article{XUE2024111821,
title = {Neural architecture search via similarity adaptive guidance},
author = {Yu Xue and Jiajie Zha and Mohamed Wahib and Tinghui Ouyang and Xiao Wang},
url = {https://www.sciencedirect.com/science/article/pii/S1568494624005957},
doi = {10.1016/j.asoc.2024.111821},
issn = {1568-4946},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Applied Soft Computing},
volume = {162},
pages = {111821},
abstract = {Evolutionary neural network architecture search (ENAS) has attracted the attention of many experts due to its global optimization capabilities to automatically search for convolutional neural network architectures based on the target task. The current search space for ENAS is not to design a fully structured network, but to search for smaller cell architectures to reduce search costs. However, blind search strategies do not effectively utilize the potential experience of the population. In order to utilize the potential experience learned by the current population to guide the evolutionary search of the population, we propose a similarity guided neural network architecture search algorithm based on cell architecture, which utilizes the similarity between pairwise architectures in the population as empirical knowledge learned by the population. Our proposed algorithm provides a novel method for calculating architecture similarity, which calculates architecture similarity separately from the cell and macro-structure. Then we decouple the connections and operations in the cell and calculate connection and operation similarity separately. In addition, we propose adaptive similarity selection and binary tournament selection strategies to enhance the algorithm’s global and local search capabilities and effectively explore the search space. Finally, we design an improved single-point crossover operator to enhance the local search ability of the evolutionary operator. The experimental results show that SAGNAS is a competitive algorithm that achieves 97.44% and 81.60% in CIFAR10 and CIFAR100 with only 1.9 GPU-days spent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lapkovskis, Alfreds; Nefedova, Natalia; Beikmohammadi, Ali
Automatic Fused Multimodal Deep Learning for Plant Identification Technical Report
2024.
@techreport{lapkovskis2024automaticfusedmultimodaldeep,
title = {Automatic Fused Multimodal Deep Learning for Plant Identification},
author = {Alfreds Lapkovskis and Natalia Nefedova and Ali Beikmohammadi},
url = {https://arxiv.org/abs/2406.01455},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Liu, Songhua; Jin, Xin; Yang, Xingyi; Ye, Jingwen; Wang, Xinchao
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization Technical Report
2024.
@techreport{liu2024stydestyminmaxstylizationdestylization,
title = {StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization},
author = {Songhua Liu and Xin Jin and Xingyi Yang and Jingwen Ye and Xinchao Wang},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Roberts, Nicholas; Guo, Samuel; Gao, Zhiqi; GNVV, Satya Sai Srinath Namburi; Cromp, Sonia; Wu, Chengjun; Duan, Chengyu; Sala, Frederic
Pretrained Hybrids with MAD Skills Technical Report
2024.
@techreport{roberts2024pretrainedhybridsmadskills,
title = {Pretrained Hybrids with MAD Skills},
author = {Nicholas Roberts and Samuel Guo and Zhiqi Gao and Satya Sai Srinath Namburi GNVV and Sonia Cromp and Chengjun Wu and Chengyu Duan and Frederic Sala},
url = {https://arxiv.org/abs/2406.00894},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ericsson, Linus; Espinosa, Miguel; Yang, Chenhongyi; Antoniou, Antreas; Storkey, Amos; Cohen, Shay B.; McDonagh, Steven; Crowley, Elliot J.
einspace: Searching for Neural Architectures from Fundamental Operations Technical Report
2024.
@techreport{ericsson2024einspacesearchingneuralarchitectures,
title = {einspace: Searching for Neural Architectures from Fundamental Operations},
author = {Linus Ericsson and Miguel Espinosa and Chenhongyi Yang and Antreas Antoniou and Amos Storkey and Shay B. Cohen and Steven McDonagh and Elliot J. Crowley},
url = {https://arxiv.org/abs/2405.20838},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Zhengang; Kang, Yan; Liu, Yuchen; Liu, Difan; Hinz, Tobias; Liu, Feng; Wang, Yanzhi
SNED: Superposition Network Architecture Search for Efficient Video Diffusion Model Technical Report
2024.
@techreport{li2024snedsuperpositionnetworkarchitecture,
title = {SNED: Superposition Network Architecture Search for Efficient Video Diffusion Model},
author = {Zhengang Li and Yan Kang and Yuchen Liu and Difan Liu and Tobias Hinz and Feng Liu and Yanzhi Wang},
url = {https://arxiv.org/abs/2406.00195},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Garcia, Andrew; Vega, Marco
Forecasting inflation with a framework for model and neural architecture search with tree-structured search spaces Technical Report
Banco Central de Reserva del Perú no. 2024-009, 2024.
@techreport{RePEc:rbp:wpaper:2024-009,
title = {Forecasting inflation with a framework for model and neural architecture search with tree-structured search spaces},
author = {Andrew Garcia and Marco Vega},
url = {https://EconPapers.repec.org/RePEc:rbp:wpaper:2024-009},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
number = {2024-009},
institution = {Banco Central de Reserva del Perú},
abstract = {This study automates the design of machine learning models for economic forecasting, with an application focus on Peru’s inflation. This is achieved by employing an Automated Machine Learning (AutoML) framework that selects the best model configurations and data-processing steps. This allows us to build models without manually trying out different options, saving time and potentially improving accuracy. The specific models explored are deep learning neural networks, which are often used for complex forecasting tasks. We use two inflation forecasting schemes: one using a single model for headline inflation, and another combining two models, one for food and energy inflation and another for inflation excluding food and energy, to predict headline inflation. By establishing this automated approach, we pave the way for further research on using machine learning to forecast economic data such as inflation in Peru.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhao, Yiyang; Wang, Linnan; Guo, Tian
Multi-objective Neural Architecture Search by Learning Search Space Partitions Technical Report
2024.
@techreport{zhao2024multiobjectiveneuralarchitecturesearch,
title = {Multi-objective Neural Architecture Search by Learning Search Space Partitions},
author = {Yiyang Zhao and Linnan Wang and Tian Guo},
url = {https://arxiv.org/abs/2406.00291},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yu, Wenbo; Fang, Hao; Chen, Bin; Sui, Xiaohang; Chen, Chuan; Wu, Hao; Xia, Shu-Tao; Xu, Ke
GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search Technical Report
2024.
@techreport{yu2024ginasboostinggradientinversion,
title = {GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search},
author = {Wenbo Yu and Hao Fang and Bin Chen and Xiaohang Sui and Chuan Chen and Hao Wu and Shu-Tao Xia and Ke Xu},
url = {https://arxiv.org/abs/2405.20725},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhao, Yiyang; Liu, Yunzhuo; Jiang, Bo; Guo, Tian
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework Technical Report
2024.
@techreport{zhao2024cenasendtoendcarbonefficientneural,
title = {CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework},
author = {Yiyang Zhao and Yunzhuo Liu and Bo Jiang and Tian Guo},
url = {https://arxiv.org/abs/2406.01414},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhao, Jiaxuan; Jiao, Licheng; Wang, Chao; Liu, Xu; Liu, Fang; Li, Lingling; Ma, Mengru; Yang, Shuyuan
Knowledge Guided Evolutionary Transformer for Remote Sensing Scene Classification Journal Article
In: IEEE Transactions on Circuits and Systems for Video Technology, pp. 1-1, 2024.
@article{10542522,
title = {Knowledge Guided Evolutionary Transformer for Remote Sensing Scene Classification},
author = {Jiaxuan Zhao and Licheng Jiao and Chao Wang and Xu Liu and Fang Liu and Lingling Li and Mengru Ma and Shuyuan Yang},
url = {https://ieeexplore.ieee.org/abstract/document/10542522},
doi = {10.1109/TCSVT.2024.3407138},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Transactions on Circuits and Systems for Video Technology},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Weijie; Shi, Feng; Zhang, Qianyun; Wang, Yu; Guo, Lantu; Lin, Yun; Gui, Guan
Few-Shot Specific Emitter Identification Leveraging Neural Architecture Search and Advanced Deep Transfer Learning Journal Article
In: IEEE Internet of Things Journal, pp. 1-1, 2024.
@article{10542350,
title = {Few-Shot Specific Emitter Identification Leveraging Neural Architecture Search and Advanced Deep Transfer Learning},
author = {Weijie Zhang and Feng Shi and Qianyun Zhang and Yu Wang and Lantu Guo and Yun Lin and Guan Gui},
url = {https://ieeexplore.ieee.org/abstract/document/10542350},
doi = {10.1109/JIOT.2024.3407737},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Internet of Things Journal},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Chao; Zhao, Jiaxuan; Li, Lingling; Jiao, Licheng; Liu, Fang; Yang, Shuyuan
Automatic Graph Topology-Aware Transformer Technical Report
2024.
@techreport{wang2024automatic,
title = {Automatic Graph Topology-Aware Transformer},
author = {Chao Wang and Jiaxuan Zhao and Lingling Li and Licheng Jiao and Fang Liu and Shuyuan Yang},
url = {https://arxiv.org/abs/2405.19779},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yang, Shangshang; Sun, Xiangkun; Xu, Ke; Liu, Yuanchao; Tian, Ye; Zhang, Xingyi
Hybrid Architecture-Based Evolutionary Robust Neural Architecture Search Journal Article
In: IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-16, 2024.
@article{10541069,
title = {Hybrid Architecture-Based Evolutionary Robust Neural Architecture Search},
author = {Shangshang Yang and Xiangkun Sun and Ke Xu and Yuanchao Liu and Ye Tian and Xingyi Zhang},
url = {https://ieeexplore.ieee.org/abstract/document/10541069},
doi = {10.1109/TETCI.2024.3400867},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
pages = {1-16},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lin, Chengmin; Yang, Pengfei; Li, Chengcheng; Cheng, Fei; Lv, Wenkai; Wang, Zhenyi; Wang, Quan
Fine-grained complexity-driven latency predictor in hardware-aware neural architecture search using composite loss Journal Article
In: Information Sciences, vol. 676, pp. 120783, 2024, ISSN: 0020-0255.
@article{LIN2024120783,
title = {Fine-grained complexity-driven latency predictor in hardware-aware neural architecture search using composite loss},
author = {Chengmin Lin and Pengfei Yang and Chengcheng Li and Fei Cheng and Wenkai Lv and Zhenyi Wang and Quan Wang},
url = {https://www.sciencedirect.com/science/article/pii/S0020025524006972},
doi = {10.1016/j.ins.2024.120783},
issn = {0020-0255},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Information Sciences},
volume = {676},
pages = {120783},
abstract = {An efficient hardware-aware neural architecture search is crucial for automating the creation of network architectures that are optimized for resource-limited platforms. However, challenges arise owing to inaccuracies in key hardware performance metrics, notably in latency estimation. This study introduces a composite loss-based complexity-driven latency predictor, an innovative approach that achieves remarkable evaluation accuracy with limited training data. This reveals a robust correlation between the layer-based complexity features and network inference latency. This insight leverages these complexity features as network architecture encodings for latency predictors, substantially enhancing the precision of latency assessments. In addition, a composite loss function is proposed that seamlessly integrates ranking and absolute performance losses. This novel approach addresses the limitations of rank-based loss methods, which often lack broader context. Incorporating a global perspective through absolute performance metrics significantly improves the generalization capabilities of the predictor across various benchmarks. Experimental results on the NAS-Bench-201, NAS-Bench-101, and MobileNetV3 benchmarks underscore the effectiveness of the predictor. For instance, in the NAS-Bench-201 evaluation, the predictor demonstrates a notable increase in Kendall's tau correlation, from 0.738 to 0.9733. These findings highlight the enhanced accuracy of the proposed approach, with far-reaching implications for optimizing network structures on resource-limited platforms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sarah, Anthony; Sridhar, Sharath Nittur; Szankin, Maciej; Sundaresan, Sairam
LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models Technical Report
2024.
@techreport{sarah2024llamanas,
title = {LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models},
author = {Anthony Sarah and Sharath Nittur Sridhar and Maciej Szankin and Sairam Sundaresan},
url = {https://arxiv.org/abs/2405.18377},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Liu, Yao; Cheng, Jianyuan; Lü, Qingtian; Liu, Zaibin; Lu, Jingjin; Fan, Zhenyu; Zhang, Lianzhi
Deep learning for geological mapping in the overburden area Journal Article
In: Frontiers in Earth Science, vol. 12, 2024, ISSN: 2296-6463.
@article{10.3389/feart.2024.1407173,
title = {Deep learning for geological mapping in the overburden area},
author = {Yao Liu and Jianyuan Cheng and Qingtian Lü and Zaibin Liu and Jingjin Lu and Zhenyu Fan and Lianzhi Zhang},
url = {https://www.frontiersin.org/articles/10.3389/feart.2024.1407173},
doi = {10.3389/feart.2024.1407173},
issn = {2296-6463},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Frontiers in Earth Science},
volume = {12},
abstract = {This paper aims to achieve bedrock geologic mapping in the overburden area using big data, distributed computing, and deep learning techniques. First, the satellite Bouguer gravity anomaly with a resolution of 2′×2′ in the range of E66°–E96°, N40°–N55° and the 1:5000000 Asia-European geological map are used to design a dataset for bedrock prediction. Then, starting from the gravity anomaly formula in the spherical coordinate system, we deduce the non-linear functional between rock density ρ and rock mineral composition m, content p, buried depth h, diagenesis time t and other variables. We analyze the feasibility of using a deep neural network to approximate the above nonlinear functional. The problem of solving the deep neural network parameters is transformed into a non-convex optimization problem, for which we give an iterative, gradient descent-based solution algorithm. Combining neural architecture search (NAS) with a human-designed approach, we propose a geological-geophysical mapping network (GGMNet). The dataset for the network consists of both gravity anomaly and a priori geological information. The network has fast convergence speed and stable iteration during the training process. It also performs better than architectures obtained by neural architecture search alone or human design alone, with mean pixel accuracy (MPA) = 63.1% and frequency weighted intersection over union (FWIoU) = 42.88. Finally, the GGMNet is used to predict the rock distribution of the Junggar Basin.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Yang; Li, Mingying; Pan, Huilin; Liu, Moyun; Zhou, Yang
Efficient Visual Fault Detection for Freight Train via Neural Architecture Search with Data Volume Robustness Technical Report
2024.
@techreport{zhang2024efficient,
title = {Efficient Visual Fault Detection for Freight Train via Neural Architecture Search with Data Volume Robustness},
author = {Yang Zhang and Mingying Li and Huilin Pan and Moyun Liu and Yang Zhou},
url = {https://arxiv.org/abs/2405.17004},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Peiwen; Wang, Xin; Zhang, Zeyang; Qin, Yijian; Zhang, Ziwei; Wang, Jialong; Li, Yang; Zhu, Wenwu
Causal-Aware Graph Neural Architecture Search under Distribution Shifts Technical Report
2024.
@techreport{li2024causalaware,
title = {Causal-Aware Graph Neural Architecture Search under Distribution Shifts},
author = {Peiwen Li and Xin Wang and Zeyang Zhang and Yijian Qin and Ziwei Zhang and Jialong Wang and Yang Li and Wenwu Zhu},
url = {https://arxiv.org/abs/2405.16489},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Shao, Jun-Min; Zeng, Guo-Qiang; Lu, Kang-Di; Geng, Guang-Gang; Weng, Jian
Automated federated learning for intrusion detection of industrial control systems based on evolutionary neural architecture search Journal Article
In: Computers & Security, vol. 143, pp. 103910, 2024, ISSN: 0167-4048.
@article{SHAO2024103910,
title = {Automated federated learning for intrusion detection of industrial control systems based on evolutionary neural architecture search},
author = {Jun-Min Shao and Guo-Qiang Zeng and Kang-Di Lu and Guang-Gang Geng and Jian Weng},
url = {https://www.sciencedirect.com/science/article/pii/S0167404824002128},
doi = {10.1016/j.cose.2024.103910},
issn = {0167-4048},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Computers & Security},
volume = {143},
pages = {103910},
abstract = {In recent years, federated learning has been applied to the security of the Internet of Things and Industrial Control Systems (ICS) due to its advantages in communication cost and privacy preservation. However, the existing deep learning models used in federated learning-based intrusion detection systems (IDS) are manually designed, relying on the extensive experience of designers, and cannot be flexibly applied to different scenarios. In this paper, we make the first attempt to automatically design a lightweight federated learning model, termed Fed-GA-CNN-IDS, for the IDS issue in ICS by evolutionary neural architecture search (NAS). Five lightweight neural architectures of Convolutional Neural Network (CNN) are considered as the basic blocks to be combined and optimized in federated NAS for ICS intrusion detection. An efficient discrete encoding strategy is developed to describe the combination of the five basic lightweight blocks, and specific discrete evolutionary operations under the framework of a genetic algorithm (GA) are elaborately designed to guide the evolutionary process of the automated federated learning model. The experimental results on three widely used ICS intrusion detection datasets, namely Gas Pipeline, SWaT, and WADI, demonstrate that the proposed Fed-GA-CNN-IDS method can obtain more lightweight models and better, or at least competitive, intrusion detection performance compared with three state-of-the-art manually designed federated learning-based IDS methods, two federated NAS methods originally developed for traditional image classification tasks, and four lightweight IDS methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Zhiying; Li, Yuancheng; Xu, Shuang; Wang, Qingle; Li, Jianbin
NPformer based static FDIAs detection for state-of-charge estimation of battery energy storage systems in smart distribution networks Journal Article
In: Journal of Energy Storage, vol. 92, pp. 112225, 2024, ISSN: 2352-152X.
@article{LIU2024112225,
title = {NPformer based static FDIAs detection for state-of-charge estimation of battery energy storage systems in smart distribution networks},
author = {Zhiying Liu and Yuancheng Li and Shuang Xu and Qingle Wang and Jianbin Li},
url = {https://www.sciencedirect.com/science/article/pii/S2352152X24018115},
doi = {10.1016/j.est.2024.112225},
issn = {2352-152X},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Journal of Energy Storage},
volume = {92},
pages = {112225},
abstract = {State of charge (SoC) estimation of battery energy storage systems (BESSs) in smart distribution networks (SDNs) is essential for ensuring security and stability and is critical to the control and operation of power systems. False data injection attacks (FDIAs) can evade bad data detection, thus affecting the SoC estimation of BESSs. Existing model-based detection methods require the manual calculation of thresholds, so this paper proposes an NPformer-based method for detecting FDIAs on BESSs in SDNs. The method constructs a pyramidal attention model based on a multi-scale C-ary tree to explore the multi-resolution representation of time series. The network structure of the NPformer is then automatically searched with neural architecture search to obtain an effective detection model. We evaluate the performance of the proposed method using the IEEE 13- and IEEE 33-bus systems with BESSs. Compared to a Transformer and a Long Short-Term Memory autoencoder, the experimental results show that the proposed method has higher detection accuracy. The proposed method achieves a detection accuracy of 97%.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kaur, Tanvir; Kamboj, Shivani; Singh, Lovedeep; Tamanna
Advanced YOLO-NAS-Based Detection and Screening of Brain Tumors Using Medical Diagnostic Proceedings Article
In: 2024 2nd International Conference on Artificial Intelligence and Machine Learning Applications Theme: Healthcare and Internet of Things (AIMLA), pp. 1-6, 2024.
@inproceedings{10531625,
title = {Advanced YOLO-NAS-Based Detection and Screening of Brain Tumors Using Medical Diagnostic},
author = {Tanvir Kaur and Shivani Kamboj and Lovedeep Singh and Tamanna},
url = {https://ieeexplore.ieee.org/abstract/document/10531625},
doi = {10.1109/AIMLA59606.2024.10531625},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {2024 2nd International Conference on Artificial Intelligence and Machine Learning Applications Theme: Healthcare and Internet of Things (AIMLA)},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sheng, Huangxu; Liu, Hai-Lin; Lai, Yutao; Zeng, Shaoda; Chen, Lei
Hierarchical Encoding Method for Retinal Segmentation Evolutionary Architecture Search Journal Article
In: IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-14, 2024.
@article{10536618,
title = {Hierarchical Encoding Method for Retinal Segmentation Evolutionary Architecture Search},
author = {Huangxu Sheng and Hai-Lin Liu and Yutao Lai and Shaoda Zeng and Lei Chen},
url = {https://ieeexplore.ieee.org/abstract/document/10536618},
doi = {10.1109/TETCI.2024.3395540},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
pages = {1-14},
keywords = {},
pubstate = {published},
tppubtype = {article}
}