Maintained by Difan Deng and Marius Lindauer.
The following list collects papers related to neural architecture search. It is by no means complete. If a paper you are looking for is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field still lags behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity across domains. For a comprehensive list of papers on Neural Architecture Search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
2025
Wu, Ying; Yan, Zheyu; Yin, Xunzhao; He, Lenian; Zhuo, Cheng
ANAS: Software–hardware co-design of approximate neural network accelerators via neural architecture search Journal Article
In: Integration, vol. 104, pp. 102469, 2025, ISSN: 0167-9260.
@article{WU2025102469,
title = {ANAS: Software–hardware co-design of approximate neural network accelerators via neural architecture search},
author = {Ying Wu and Zheyu Yan and Xunzhao Yin and Lenian He and Cheng Zhuo},
url = {https://www.sciencedirect.com/science/article/pii/S0167926025001269},
doi = {10.1016/j.vlsi.2025.102469},
issn = {0167-9260},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Integration},
volume = {104},
pages = {102469},
abstract = {Deep Neural Networks (DNNs) are prevalent solutions for perception tasks, with energy efficiency being particularly critical for deployment on edge platforms. Various studies have proposed efficient DNN edge deployment solutions; however, an important aspect – approximate computing – has been overlooked. Current research primarily focuses on designing approximate circuits for specific DNN models, neglecting the influence of DNN architecture design. To address this gap, this paper proposes a software–hardware co-exploration framework for approximate DNN accelerator design that jointly explores approximate multipliers and neural architectures. This framework, termed Approximate Neural Architecture Search (ANAS), tackles two main challenges: (1) efficiently evaluating the impact of approximate multipliers on application performance and accelerator design for each sample, and (2) effectively navigating a large design space to identify optimal configurations. The framework employs a recurrent neural network-based reinforcement learning algorithm to identify an optimal approximate multiplier-DNN architecture pair that balances DNN accuracy and hardware cost. Experimental results demonstrate that ANAS achieves comparable accuracy while reducing energy consumption and latency by up to 40% compared to state-of-the-art NAS-based methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
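At its core, ANAS maximizes a scalarized reward that trades DNN accuracy against hardware cost. A minimal Python sketch of that objective, assuming toy stand-ins for the accuracy and hardware evaluators (MULTIPLIERS, WIDTHS, and evaluate are illustrative names, not the paper's API); the RNN-based reinforcement-learning controller is replaced by exhaustive enumeration for brevity:

import random

# Illustrative search spaces; the paper's actual spaces cover
# approximate-multiplier configurations and full DNN architectures.
MULTIPLIERS = ["exact", "approx_a", "approx_b"]
WIDTHS = [16, 32, 64]

def evaluate(multiplier, width):
    # Toy stand-in for the paper's accuracy and hardware-cost evaluators.
    accuracy = random.random()
    energy = {"exact": 1.0, "approx_a": 0.7, "approx_b": 0.6}[multiplier]
    return accuracy, energy * width / 64.0

def reward(accuracy, hw_cost, lam=0.5):
    # Scalarized objective balancing accuracy against hardware cost,
    # which the RL controller would learn to maximize.
    return accuracy - lam * hw_cost

best = max(((m, w) for m in MULTIPLIERS for w in WIDTHS),
           key=lambda mw: reward(*evaluate(*mw)))
print("best multiplier/width pair:", best)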
Son, Seok Bin; Kim, Joongheon
Quantum Circuit Structure Optimization for Quantum Reinforcement Learning Technical Report
2025.
@techreport{son2025quantumcircuitstructureoptimization,
title = {Quantum Circuit Structure Optimization for Quantum Reinforcement Learning},
author = {Seok Bin Son and Joongheon Kim},
url = {https://arxiv.org/abs/2507.00589},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Świderski, Szymon; Jastrzębska, Agnieszka
Data classification with dynamically growing and shrinking neural networks Journal Article
In: Journal of Computational Science, vol. 91, pp. 102660, 2025, ISSN: 1877-7503.
@article{SWIDERSKI2025102660,
title = {Data classification with dynamically growing and shrinking neural networks},
author = {Szymon Świderski and Agnieszka Jastrzębska},
url = {https://www.sciencedirect.com/science/article/pii/S1877750325001371},
doi = {10.1016/j.jocs.2025.102660},
issn = {1877-7503},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Computational Science},
volume = {91},
pages = {102660},
abstract = {The issue of data-driven neural network model construction is one of the core problems in the domain of Artificial Intelligence. A standard approach assumes a fixed architecture with trainable weights. A conceptually more advanced assumption is that we not only train the weights but also find out the optimal model architecture. We present a new method that realizes just that. This article is an extended version of our conference paper titled “Dynamic Growing and Shrinking of Neural Networks with Monte Carlo Tree Search” (Świderski and Jastrzebska, 2024). In the paper, we show in detail how to create a neural network with a procedure that allows dynamic shrinking and growing of the model while it is being trained. The decision-making mechanism for the architectural design is governed by the Monte Carlo tree search procedure, which simulates network behavior and allows comparing several candidate architecture changes to choose the best one. The proposed method was validated using both visual and time series datasets, demonstrating its particular effectiveness in multivariate time series classification. This is attributed to the architecture’s ability to adapt dynamically, allowing independent modifications for each time series. To enhance the reproducibility of our method, we publish open-source code of the proposed method. It was prepared in Python. Experimental evaluations in visual pattern and multivariate time series classification tasks revealed highly promising performance, underscoring the method’s robustness and adaptability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
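To make the grow-and-shrink idea concrete: a minimal sketch, representing an architecture as a list of layer widths and using a toy proxy score in place of the paper's simulations; the actual method drives these decisions with Monte Carlo tree search rather than the one-step lookahead shown here.

import random

def simulate(arch):
    # Placeholder for the paper's network-behavior simulation; returns a
    # deterministic toy score for an architecture (a list of layer widths).
    random.seed(sum(arch))
    return sum(arch) ** 0.5 - 0.01 * len(arch) + random.random() * 0.1

def candidate_edits(arch):
    yield arch + [16]                       # grow: append a layer
    if len(arch) > 1:
        yield arch[:-1]                     # shrink: drop the last layer
    yield [w * 2 for w in arch]             # grow: widen every layer
    yield [max(1, w // 2) for w in arch]    # shrink: narrow every layer

arch = [16, 16]
for step in range(5):
    arch = max(candidate_edits(arch), key=simulate)
    print(f"step {step}: architecture = {arch}")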
Narduzzi, Simon; Vuagniaux, Rémy; Sharma, Kishan; Liu, Shih-Chii; Dunbar, L. Andrea
Steerable Zero-Shot Neural Architecture Search for Efficient Edge Inference Proceedings Article
In: 2025 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5, 2025.
@inproceedings{11044278,
title = {Steerable Zero-Shot Neural Architecture Search for Efficient Edge Inference},
author = {Simon Narduzzi and Rémy Vuagniaux and Kishan Sharma and Shih-Chii Liu and L. Andrea Dunbar},
url = {https://ieeexplore.ieee.org/abstract/document/11044278},
doi = {10.1109/ISCAS56072.2025.11044278},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE International Symposium on Circuits and Systems (ISCAS)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
An, Yang; Zhang, Changsheng; Shao, Jintao; Yan, Yuxiao; Sun, Baiqing
An Efficient Evolutionary Neural Architecture Search Algorithm Without Training Journal Article
In: Biomimetics, vol. 10, no. 7, 2025, ISSN: 2313-7673.
@article{biomimetics10070421,
title = {An Efficient Evolutionary Neural Architecture Search Algorithm Without Training},
author = {Yang An and Changsheng Zhang and Jintao Shao and Yuxiao Yan and Baiqing Sun},
url = {https://www.mdpi.com/2313-7673/10/7/421},
doi = {10.3390/biomimetics10070421},
issn = {2313-7673},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Biomimetics},
volume = {10},
number = {7},
abstract = {Neural Architecture Search (NAS) has made significant advancements in autonomously constructing high-performance network architectures, capturing extensive attention. However, a key challenge of existing NAS approaches is the intensive performance evaluation, leading to significant time and computational resource consumption. In this paper, we propose an efficient Evolutionary Neural Architecture Search (ENAS) method to address this issue. Specifically, in order to accelerate the convergence speed of the algorithm and shorten the search time, thereby avoiding blind searching in the early stages of the algorithm, we drew on the principles of biometrics to redesign the interaction between individuals in the evolutionary algorithm. By making full use of the information carried by individuals, we promoted information exchange and optimization between individuals and their neighbors, thereby improving local search capabilities while maintaining global search capabilities. Furthermore, to accelerate the evaluation process and minimize computational resource consumption, a multi-metric training-free evaluator is introduced to assess network performance, bypassing the resource-intensive training phase, and the adopted multi-metric combination method further solves the ranking offset problem. To evaluate the performance of the proposed method, we conduct experiments on two widely adopted benchmarks, NAS-Bench-101 and NAS-Bench-201. Comparative analysis with state-of-the-art algorithms shows that our proposed method identifies network architectures with comparable or better performance while requiring significantly less time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
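The multi-metric combination that counters the ranking-offset problem can be illustrated with plain rank aggregation. A small sketch with made-up zero-cost scores (the paper's actual metrics and combination rule may differ):

import numpy as np

# Hypothetical zero-cost scores for 5 candidate architectures under
# 3 training-free metrics (rows = architectures, columns = metrics).
scores = np.array([
    [0.9, 0.2, 0.7],
    [0.8, 0.9, 0.6],
    [0.3, 0.8, 0.9],
    [0.5, 0.5, 0.5],
    [0.1, 0.3, 0.2],
])

# Rank candidates within each metric, then average the ranks; aggregating
# several proxies this way is more robust than trusting any single one.
ranks = scores.argsort(axis=0).argsort(axis=0)   # higher score -> higher rank
aggregate = ranks.mean(axis=1)
print("best architecture index:", int(aggregate.argmax()))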
Gharge, Samiksha Dnyaneshwar; Katurde, Atharva Digamber; Shinde, Bhakti Bharat; Karpe, Piyush Pandurang; Lohar, Anil Tukaram
Enhanced Classification of Astronomical Images Using Optimized Neural Networks via NAS Proceedings Article
In: 2025 International Conference on Knowledge Engineering and Communication Systems (ICKECS), pp. 1-6, 2025.
@inproceedings{11035681,
title = {Enhanced Classification of Astronomical Images Using Optimized Neural Networks via NAS},
author = {Samiksha Dnyaneshwar Gharge and Atharva Digamber Katurde and Bhakti Bharat Shinde and Piyush Pandurang Karpe and Anil Tukaram Lohar},
url = {https://ieeexplore.ieee.org/abstract/document/11035681},
doi = {10.1109/ICKECS65700.2025.11035681},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 International Conference on Knowledge Engineering and Communication Systems (ICKECS)},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jiang, Zhijian; Zhao, Taoran; Wang, Menglei; Zhou, Junfeng; Deng, Ziwei; Lin, Xinhua
An Aeromagnetic Compensation Method Based on Differentiable Architecture Search-Guided Physics-Informed Neural Network Journal Article
In: IEEE Geoscience and Remote Sensing Letters, pp. 1-1, 2025.
@article{11052264,
title = {An Aeromagnetic Compensation Method Based on Differentiable Architecture Search-Guided Physics-Informed Neural Network},
author = {Zhijian Jiang and Taoran Zhao and Menglei Wang and Junfeng Zhou and Ziwei Deng and Xinhua Lin},
doi = {10.1109/LGRS.2025.3583559},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Geoscience and Remote Sensing Letters},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Cheng, Junyan; Clark, Peter; Richardson, Kyle
Language Modeling by Language Models Technical Report
2025.
@techreport{cheng2025languagemodelinglanguagemodels,
title = {Language Modeling by Language Models},
author = {Junyan Cheng and Peter Clark and Kyle Richardson},
url = {https://arxiv.org/abs/2506.20249},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yao, Junjie; Zhu, Guijie; Zhuang, Jiafan; Hao, Zhifeng; Li, Wenji; Fan, Zhun
U-Shaped Network Based on Particle Swarm Optimization for Retinal Vessel Segmentation Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043060,
title = {U-Shaped Network Based on Particle Swarm Optimization for Retinal Vessel Segmentation},
author = {Junjie Yao and Guijie Zhu and Jiafan Zhuang and Zhifeng Hao and Wenji Li and Zhun Fan},
url = {https://ieeexplore.ieee.org/abstract/document/11043060},
doi = {10.1109/CEC65147.2025.11043060},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Hoang, Anh Tuan; Viharos, Zsolt János
Robust Superiority of the MICS-EFS Configuration Search Algorithm Through Modular Extensions of Complex Neural Architectures Proceedings Article
In: 2025 IEEE 19th International Symposium on Applied Computational Intelligence and Informatics (SACI), pp. 1-6, 2025.
@inproceedings{11030092,
title = {Robust Superiority of the MICS-EFS Configuration Search Algorithm Through Modular Extensions of Complex Neural Architectures},
author = {Anh Tuan Hoang and Zsolt János Viharos},
url = {https://ieeexplore.ieee.org/abstract/document/11030092},
doi = {10.1109/SACI66288.2025.11030092},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE 19th International Symposium on Applied Computational Intelligence and Informatics (SACI)},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Mei, Aohan; Li, Nan; Zhang, Tian; Ma, Lianbo
Evolutionary Graph Fusion Architecture Search Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043065,
title = {Evolutionary Graph Fusion Architecture Search},
author = {Aohan Mei and Nan Li and Tian Zhang and Lianbo Ma},
url = {https://ieeexplore.ieee.org/abstract/document/11043065},
doi = {10.1109/CEC65147.2025.11043065},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, Kainan; Liu, Chang
Multi-Objective Evolutionary Neural Architecture Search for Material Microstructure Segmentation Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043095,
title = {Multi-Objective Evolutionary Neural Architecture Search for Material Microstructure Segmentation},
author = {Kainan Zhang and Chang Liu},
url = {https://ieeexplore.ieee.org/abstract/document/11043095},
doi = {10.1109/CEC65147.2025.11043095},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Hu, Hao; Wang, Tao; Li, Yanan; Zhang, Zhao; Peng, Xingguang
Neural architecture search-based BP neural network for underwater glider motion parameter generation Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11042943,
title = {Neural architecture search-based BP neural network for underwater glider motion parameter generation},
author = {Hao Hu and Tao Wang and Yanan Li and Zhao Zhang and Xingguang Peng},
url = {https://ieeexplore.ieee.org/abstract/document/11042943},
doi = {10.1109/CEC65147.2025.11042943},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Vu, Trung Hieu; Nguyen, Tien Thanh; Elyan, Eyad
An Evolutionary Neural Architecture Search-Based Approach for Time Series Forecasting Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043002,
title = {An Evolutionary Neural Architecture Search-Based Approach for Time Series Forecasting},
author = {Trung Hieu Vu and Tien Thanh Nguyen and Eyad Elyan},
url = {https://ieeexplore.ieee.org/abstract/document/11043002},
doi = {10.1109/CEC65147.2025.11043002},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, Xiaolei; Xue, Yu; Neri, Ferrante
Embedding Comparator for Evolutionary Neural Architecture Search via Contrastive Learning Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043127,
title = {Embedding Comparator for Evolutionary Neural Architecture Search via Contrastive Learning},
author = {Xiaolei Zhang and Yu Xue and Ferrante Neri},
url = {https://ieeexplore.ieee.org/abstract/document/11043127},
doi = {10.1109/CEC65147.2025.11043127},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jia, Xiaogang; Jian, Songlei; Tan, Yusong; Che, Yonggang; Chen, Wei; Liang, Zhengfa; He, Yulin
Hierarchical Neural Architecture Search for Fast and Accurate Depth Completion Proceedings Article
In: Proceedings of the 2025 International Conference on Multimedia Retrieval, pp. 569–578, Association for Computing Machinery, Chicago, IL, USA, 2025, ISBN: 9798400718779.
@inproceedings{10.1145/3731715.3733357,
title = {Hierarchical Neural Architecture Search for Fast and Accurate Depth Completion},
author = {Xiaogang Jia and Songlei Jian and Yusong Tan and Yonggang Che and Wei Chen and Zhengfa Liang and Yulin He},
url = {https://doi.org/10.1145/3731715.3733357},
doi = {10.1145/3731715.3733357},
isbn = {9798400718779},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Proceedings of the 2025 International Conference on Multimedia Retrieval},
pages = {569–578},
publisher = {Association for Computing Machinery},
address = {Chicago, IL, USA},
series = {ICMR '25},
abstract = {Feature fusion is the critical component in the depth completion task. Current approaches mainly utilize manually designed fusion modules to construct depth completion networks, but they generally face the following two problems: 1. The feature fusion modules at different resolutions are invariant, requiring the modules to have multi-scale generalization. 2. The modules themselves are complex, and additional branches are needed to enhance features and for post-processing optimization. Repeated modules and additional branches lead to network redundancy and increased computational costs. To address these challenges, we design a depth completion network based on neural architecture search. We define the search space based on cells and employ machine learning to search for different network structures at each resolution to construct feature fusion modules. Meanwhile, we optimize the complex pruning process in Hierarchical Neural Architecture Search (HNAS) by defining distinct cell units to transform convex coefficients, avoiding multiple feature fusion units and parameter files. By dynamically adjusting convex coefficient gradients during training, we eliminate the retraining process. We conduct extensive validation experiments on indoor and outdoor datasets. Our proposed network ranks first in accuracy among methods with a runtime of less than 100ms. It achieves competitive performance with state-of-the-art methods in just 19.8ms and reaches the Pareto optimal solution in terms of speed and accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
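The convex coefficients at the heart of this kind of differentiable, cell-based search can be sketched in a few lines of PyTorch. This is a generic illustration with placeholder candidate operations, not the authors' code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionCell(nn.Module):
    # A differentiable fusion cell: candidate operations are blended with
    # softmax-normalized (convex) coefficients that the search later prunes.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)   # convex combination
        return sum(w * op(x) for w, op in zip(weights, self.ops))

cell = FusionCell(8)
print(cell(torch.randn(1, 8, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])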
Liao, Peng; Wang, Xilu; Jin, Yaochu; Sun, Chaoli; Du, Wenli
Surrogate-Assisted Evolutionary Neural Architecture Search with Architecture Knowledge Transfer Proceedings Article
In: 2025 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, 2025.
@inproceedings{11043035,
title = {Surrogate-Assisted Evolutionary Neural Architecture Search with Architecture Knowledge Transfer},
author = {Peng Liao and Xilu Wang and Yaochu Jin and Chaoli Sun and Wenli Du},
doi = {10.1109/CEC65147.2025.11043035},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 IEEE Congress on Evolutionary Computation (CEC)},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Bessalah, Aniss; Abdelmoumen, Hatem Mohamed; Benatchba, Karima; Benmeziane, Hadjer
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing Technical Report
2025.
@techreport{bessalah2025analognasbenchnasbenchmarkanalog,
title = {AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing},
author = {Aniss Bessalah and Hatem Mohamed Abdelmoumen and Karima Benatchba and Hadjer Benmeziane},
url = {https://arxiv.org/abs/2506.18495},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Aljoud, Mamdouh; Tavares, Gabriel Marques; Leiber, Collin; Seidl, Thomas
DCMatch - Identify Matching Architectures in Deep Clustering Through Meta-learning Proceedings Article
In: Wu, Xintao; Spiliopoulou, Myra; Wang, Can; Kumar, Vipin; Cao, Longbing; Wu, Yanqiu; Yao, Yu; Wu, Zhangkai (Ed.): Advances in Knowledge Discovery and Data Mining, pp. 213–224, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-96-8170-9.
@inproceedings{10.1007/978-981-96-8170-9_17,
title = {DCMatch - Identify Matching Architectures in Deep Clustering Through Meta-learning},
author = {Mamdouh Aljoud and Gabriel Marques Tavares and Collin Leiber and Thomas Seidl},
editor = {Xintao Wu and Myra Spiliopoulou and Can Wang and Vipin Kumar and Longbing Cao and Yanqiu Wu and Yu Yao and Zhangkai Wu},
isbn = {978-981-96-8170-9},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Advances in Knowledge Discovery and Data Mining},
pages = {213–224},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {The effectiveness of deep clustering algorithms like Deep Embedded Clustering (DEC) is heavily influenced by the architecture of the neural network employed. However, selecting an optimal architecture is challenging due to the absence of labels in clustering tasks, which makes traditional Neural Architecture Search (NAS) methods unsuitable. To address this, we propose a novel dataset characterization method specifically tailored for image datasets, combining deep-learning-based and statistical feature extraction techniques. By utilizing features extracted from a small subset of images, our method effectively captures both high-level semantic and low-level statistical properties of the data. These dataset characteristics are then employed in a meta-learning framework to recommend autoencoder architectures likely to outperform default configurations. Extensive experiments on 20 image datasets validate the robustness of our approach, achieving improved clustering performance on 16 datasets compared to the baseline configuration. We make our code available here: https://github.com/mamdouhJ/DCMatch.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Shawi, Radwa El
AgingFedNAS: Aging Evolution Federated Deep Learning for Architecture and Hyperparameter Search Proceedings Article
In: Wu, Xintao; Spiliopoulou, Myra; Wang, Can; Kumar, Vipin; Cao, Longbing; Wu, Yanqiu; Yao, Yu; Wu, Zhangkai (Ed.): Advances in Knowledge Discovery and Data Mining, pp. 107–119, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-96-8173-0.
@inproceedings{10.1007/978-981-96-8173-0_9,
title = {AgingFedNAS: Aging Evolution Federated Deep Learning for Architecture and Hyperparameter Search},
author = {Radwa El Shawi},
editor = {Xintao Wu and Myra Spiliopoulou and Can Wang and Vipin Kumar and Longbing Cao and Yanqiu Wu and Yu Yao and Zhangkai Wu},
url = {https://link.springer.com/chapter/10.1007/978-981-96-8173-0_9},
isbn = {978-981-96-8173-0},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Advances in Knowledge Discovery and Data Mining},
pages = {107–119},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {Despite advancements in Automatic Machine Learning (AutoML), industries encounter challenges in implementation due to data privacy concerns and the costs of centralized data storage. Federated Learning (FL) provides a decentralized approach, allowing multiple clients to collaboratively train models without sharing their datasets. However, many existing FL techniques utilize pre-defined model architectures from centralized environments, which may not be optimal for the non-iid data distributions commonly found among FL clients. This paper introduces AgingFedNAS, a framework designed to automate model design in FL by employing an evolutionary approach to jointly optimize neural architectures and hyperparameters. Comprehensive experiments conducted on heterogeneous data splits from CIFAR-10, Shakespeare, FEMNIST, Tiny-ImageNet, and a medical breast density classification dataset demonstrate that AgingFedNAS outperforms state-of-the-art FL frameworks, including FedAvg, FEATHERS, FedEx, and FedNAS, particularly in non-iid conditions. Notably, AgingFedNAS achieves an accuracy of 89.8% on CIFAR-10, exceeding the best baseline, FEATHERS, by 3.78%. In the breast density classification task, it surpasses FedNAS by 1.3%, achieving up to 3.2% higher accuracy for specific clients under non-iid scenarios. Additionally, in highly heterogeneous data environments, AgingFedNAS shows a 2.3% accuracy improvement on CIFAR-10 compared to the top-performing baseline.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
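AgingFedNAS builds on aging (regularized) evolution, whose defining trait is that the oldest population member is discarded rather than the worst. A minimal single-machine sketch, assuming a toy bit-string encoding and fitness; the federated evaluation protocol is omitted:

import collections
import random

def fitness(arch):
    # Placeholder for evaluating an (architecture, hyperparameter)
    # encoding across federated clients; a toy score here.
    return sum(arch) + random.random()

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] ^= 1   # flip one design bit
    return child

# Aging evolution: tournament-select a parent, mutate it, and always drop
# the OLDEST member; the deque's maxlen implements the aging.
population = collections.deque(maxlen=20)
for _ in range(20):
    arch = [random.randint(0, 1) for _ in range(8)]
    population.append((fitness(arch), arch))

best = max(population)
for _ in range(100):
    tournament = random.sample(list(population), 5)
    _, parent = max(tournament)
    child = mutate(parent)
    scored = (fitness(child), child)
    population.append(scored)          # oldest member falls off the left
    best = max(best, scored)
print("best architecture:", best[1], "fitness:", round(best[0], 3))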
Ji, Zipeng; Zhu, Guanghui; Yuan, Chunfeng; Huang, Yihua
RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy Proceedings Article
In: Forty-second International Conference on Machine Learning, 2025.
@inproceedings{ji2025rznas,
title = {RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy},
author = {Zipeng Ji and Guanghui Zhu and Chunfeng Yuan and Yihua Huang},
url = {https://openreview.net/forum?id=9UExQpH078},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Forty-second International Conference on Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Toshikawa, Yuto; Iijima, Ryo; Mori, Tatsuya
Automated Exploration of Optimal Neural Network Structures for Deepfake Detection Proceedings Article
In: Adi, Kamel; Bourdeau, Simon; Durand, Christel; Tong, Valérie Viet Triem; Dulipovici, Alina; Kermarrec, Yvon; Garcia-Alfaro, Joaquin (Ed.): Foundations and Practice of Security, pp. 108–120, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-87496-3.
@inproceedings{10.1007/978-3-031-87496-3_8,
title = {Automated Exploration of Optimal Neural Network Structures for Deepfake Detection},
author = {Yuto Toshikawa and Ryo Iijima and Tatsuya Mori},
editor = {Kamel Adi and Simon Bourdeau and Christel Durand and Valérie Viet Triem Tong and Alina Dulipovici and Yvon Kermarrec and Joaquin Garcia-Alfaro},
url = {https://link.springer.com/chapter/10.1007/978-3-031-87496-3_8},
isbn = {978-3-031-87496-3},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Foundations and Practice of Security},
pages = {108–120},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {The proliferation of Deepfake technology has raised concerns about its potential misuse for malicious purposes, such as defaming celebrities or causing political unrest. While existing methods have reported high accuracy in detecting Deepfakes, challenges remain in adapting to the rapidly evolving Deepfake technology and developing efficient and effective detectors. In this paper, we propose a novel approach to address these challenges by utilizing advanced Neural Architecture Search (NAS) methods, specifically focusing on DARTS, PC-DARTS, and DU-DARTS. Our experimental results demonstrate that the PC-DARTS method achieves the highest test AUC of 0.88 among the techniques investigated, with a learning time of only 2.86 GPU days. This highlights the efficiency and effectiveness of our approach in automatically building Deepfake detection models. Moreover, the models using NAS exhibit competitive performance compared to state-of-the-art architectures such as XceptionNet, EfficientNet, and MobileNet. Our results suggest that the automatic search process using advanced NAS methods can quickly and easily construct adaptive and high-performance Deepfake detection models, indicating a new and promising direction for combating the ever-evolving Deepfake technology.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
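Among the methods compared above, PC-DARTS owes its efficiency to partial channel connections. A generic PyTorch sketch of that mechanism with placeholder candidate operations (the real method also shuffles channels, omitted here):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialChannelMixedOp(nn.Module):
    # Only 1/k of the channels pass through the architecture-weighted
    # candidate operations; the rest bypass, cutting supernetwork memory.
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        sub = channels // k
        self.ops = nn.ModuleList([
            nn.Conv2d(sub, sub, 3, padding=1),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        sub = x.size(1) // self.k
        xa, xb = x[:, :sub], x[:, sub:]
        w = F.softmax(self.alpha, dim=0)
        mixed = sum(wi * op(xa) for wi, op in zip(w, self.ops))
        return torch.cat([mixed, xb], dim=1)

op = PartialChannelMixedOp(16)
print(op(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 8, 8])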
García, Jesús Leopoldo Llano; Monroy, Raúl; Hernández, Víctor Adrián Sosa; Deb, Kalyanmoy
Beyond Performance: Designing a Super-Resolution Architecture Search Space and a Hybrid Multi-Objective Approach for Neural Architecture Optimization Journal Article
In: IEEE Access, vol. 13, pp. 107187-107203, 2025.
@article{11045675,
title = {Beyond Performance: Designing a Super-Resolution Architecture Search Space and a Hybrid Multi-Objective Approach for Neural Architecture Optimization},
author = {Jesús Leopoldo Llano García and Raúl Monroy and Víctor Adrián Sosa Hernández and Kalyanmoy Deb},
url = {https://ieeexplore.ieee.org/abstract/document/11045675},
doi = {10.1109/ACCESS.2025.3581919},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {107187-107203},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Shu, Xin; Niu, Mengxuan; Zhang, Yi; Luo, Wei; Zhou, Renjie
Neural Architecture Search Generated Phase Retrieval Net for Real-Time Off-Axis Quantitative Phase Imaging Journal Article
In: IEEE Photonics Technology Letters, vol. 37, no. 18, pp. 1069-1072, 2025.
@article{11039840,
title = {Neural Architecture Search Generated Phase Retrieval Net for Real-Time Off-Axis Quantitative Phase Imaging},
author = {Xin Shu and Mengxuan Niu and Yi Zhang and Wei Luo and Renjie Zhou},
url = {https://ieeexplore.ieee.org/abstract/document/11039840},
doi = {10.1109/LPT.2025.3581063},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Photonics Technology Letters},
volume = {37},
number = {18},
pages = {1069-1072},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rajeev, Shreyas; Babu, B Sathish
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach Technical Report
2025.
@techreport{rajeev2025findingoptimalkernelsize,
title = {Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach},
author = {Shreyas Rajeev and B Sathish Babu},
url = {https://arxiv.org/abs/2506.14846},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xu, Jingjing; Yang, Zijian; Zeyer, Albert; Beck, Eugen; Schlueter, Ralf; Ney, Hermann
Dynamic Acoustic Model Architecture Optimization in Training for ASR Technical Report
2025.
@techreport{xu2025dynamicacousticmodelarchitecture,
title = {Dynamic Acoustic Model Architecture Optimization in Training for ASR},
author = {Jingjing Xu and Zijian Yang and Albert Zeyer and Eugen Beck and Ralf Schlueter and Hermann Ney},
url = {https://arxiv.org/abs/2506.13180},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Han, Honggui; Zhang, Qiyu; Li, Fangyu; Du, Yongping
Exploiting long-term markovian feature importance via dual attention for partially-connected differential architecture search Journal Article
In: Engineering Applications of Artificial Intelligence, vol. 158, pp. 111476, 2025, ISSN: 0952-1976.
@article{HAN2025111476,
title = {Exploiting long-term markovian feature importance via dual attention for partially-connected differential architecture search},
author = {Honggui Han and Qiyu Zhang and Fangyu Li and Yongping Du},
url = {https://www.sciencedirect.com/science/article/pii/S0952197625014782},
doi = {10.1016/j.engappai.2025.111476},
issn = {0952-1976},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Engineering Applications of Artificial Intelligence},
volume = {158},
pages = {111476},
abstract = {Differentiable architecture search (DARTS) is implemented as a gradient-based search method for neural architecture generation. However, DARTS suffers from unbalanced competition between unweighted and weighted operations in the search phase of the supernetwork, resulting in a collapse of the search architecture. In this paper, exploiting long-term markovian feature importance via dual attention for partially-connected differential architecture search (MA-DARTS) is proposed, to overcome the excessive accumulation of unweighted operation dominance by reducing redundant features in the supernetwork. First, spatial location attention factors for different semantic groups are learned through spatial attention. The grouped attention approach contributes to capture changes in the spatial semantic importance of search features. Secondly, the channel feature importance is obtained by learning channel attention weights without dimensionality reduction through a one-dimensional convolution factor. Finally, a Markov chain-based long-term importance feature channel selection strategy is designed. This strategy dynamically transmits key features to the search space, which improves the utilization of effective feature information in search. The experimental results demonstrate that MA-DARTS effectively suppresses the problem of excessive proportion of unweighted operations during the search process, achieving better network performance while ensuring the stability of the architecture search. Meanwhile, the proposed method achieves 0.43 %, 0.68 % and 2.2 % accuracy improvement compared to DARTS on Canadian institute for advanced research CIFAR-10, CIFAR-100 and ImageNet datasets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
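The channel-importance component described in the abstract (attention weights learned without dimensionality reduction through a one-dimensional convolution) can be sketched generically in PyTorch; this is an ECA-style block, and the grouped spatial attention and Markov-chain selection strategy are not shown:

import torch
import torch.nn as nn

class ChannelAttention1D(nn.Module):
    # Channel attention without dimensionality reduction: a 1-D convolution
    # slides over the pooled channel descriptor to give per-channel weights.
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                         # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                    # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1-D conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]

att = ChannelAttention1D()
print(att(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 8, 8])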
Matthew, Bamidele; Pezzè, Mauro; Abrahão, Silvia; Penzenstadler, Birgit; Mandal, Ashis; Nadim, Md; Schultz, Ulrik
Neural Architecture Search for AutoML in Healthcare Diagnostics: Customizing Models for Clinical Accuracy Journal Article
In: Journal of Healthcare Informatics Research, 2025.
@article{articleo,
title = {Neural Architecture Search for AutoML in Healthcare Diagnostics: Customizing Models for Clinical Accuracy},
author = {Bamidele Matthew and Mauro Pezzè and Silvia Abrahão and Birgit Penzenstadler and Ashis Mandal and Md Nadim and Ulrik Schultz},
url = {https://www.researchgate.net/profile/Bamidele-Matthew-2/publication/392727682_Neural_Architecture_Search_for_AutoML_in_Healthcare_Diagnostics_Customizing_Models_for_Clinical_Accuracy/links/6850419f24267473b777985e/Neural-Architecture-Search-for-AutoML-in-Healthcare-Diagnostics-Customizing-Models-for-Clinical-Accuracy.pdf},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Healthcare Informatics Research},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Fayyazi, Arya; Kamal, Mehdi; Pedram, Massoud
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering Technical Report
2025.
@techreport{fayyazi2025marcohardwareawareneuralarchitecture,
title = {MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering},
author = {Arya Fayyazi and Mehdi Kamal and Massoud Pedram},
url = {https://arxiv.org/abs/2506.13755},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yan, Renao
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification Technical Report
2025.
@techreport{yan2025oneshotneuralarchitecturesearch,
title = {One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification},
author = {Renao Yan},
url = {https://arxiv.org/abs/2506.14176},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wang, Zhen; Zeng, Ruhao; Zhang, Hui; Chen, Tianyi
DeNoiseNAS: revisiting single-level optimization for efficient and stable neural architecture search Journal Article
In: Expert Systems with Applications, vol. 293, pp. 128649, 2025, ISSN: 0957-4174.
@article{WANG2025128649,
title = {DeNoiseNAS: revisiting single-level optimization for efficient and stable neural architecture search},
author = {Zhen Wang and Ruhao Zeng and Hui Zhang and Tianyi Chen},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425022687},
doi = {10.1016/j.eswa.2025.128649},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
volume = {293},
pages = {128649},
abstract = {Neural Architecture Search (NAS) has been an important topic to automate the designs of deep neural networks (DNNs). In this realm, differential NAS via multi-level optimization and zero-shot NAS are perhaps two most popular approaches. However, the existing methods typically suffer from balancing between search efficiency and stability. To address the dilemma, we revisit formulating the differentiable NAS as a single-level optimization problem and propose DeNoiseNAS to address the known coupling issue lying in single-level NAS paradigm. DeNoiseNAS employs a sophisticated search schema that establishes evaluation metrics through theoretical analysis of the neuron optimization process and progressively prunes based on these metrics to identify and remove redundant operators and noisy data samples. As a result, we achieve significant gains regarding search efficiency due to the pruning search space and dataset as well as sub-networks of higher performance due to the elimination of negative impacts from the noisy instances. Numerical experiments across extensive NAS benchmarks well validate the efficacy of DeNoiseNAS. In the DARTS and NAS-Bench-201, while maintaining a competitive search efficiency akin to zero-shot NAS, the architectures uncovered by our strategy surpass the existing state of the art in terms of accuracy, particularly on the ImageNet2012 dataset. In the AutoFormer benchmark, our method efficiently searches for high-performance architectures while consuming fewer resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
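For context, the bi-level formulation of differentiable NAS and the single-level one this paper revisits differ as follows (standard notation, not taken verbatim from the paper):

% Bi-level (DARTS-style): architecture parameters alpha are tuned on
% validation data, weights w on training data.
\min_{\alpha} \; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha), \alpha\bigr)
\quad \text{s.t.} \quad
w^{*}(\alpha) = \arg\min_{w} \mathcal{L}_{\mathrm{train}}(w, \alpha)

% Single-level: weights and architecture are optimized jointly on one
% objective, which is cheaper but couples w and alpha, the coupling
% issue DeNoiseNAS targets.
\min_{w, \alpha} \; \mathcal{L}_{\mathrm{train}}(w, \alpha)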
Ayachi, Riadh; Afif, Mouna; Said, Yahia; Abdelali, Abdessalem Ben
Lightweight path aggregation network for pedestrian detection on FPGA board Journal Article
In: Journal of Parallel and Distributed Computing, vol. 204, pp. 105137, 2025, ISSN: 0743-7315.
@article{AYACHI2025105137,
title = {Lightweight path aggregation network for pedestrian detection on FPGA board},
author = {Riadh Ayachi and Mouna Afif and Yahia Said and Abdessalem Ben Abdelali},
url = {https://www.sciencedirect.com/science/article/pii/S0743731525001042},
doi = {10.1016/j.jpdc.2025.105137},
issn = {0743-7315},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Parallel and Distributed Computing},
volume = {204},
pages = {105137},
abstract = {In urban environments, pedestrian safety stands as a pivotal metric dictating the accuracy and efficacy of cutting-edge technologies like Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. However, the deployment of such technologies introduces various constraints, notably including the computational resources of processing boards. Therefore, constructing a robust pedestrian detection system necessitates achieving a delicate balance between performance and computational complexity. In this study, we propose the development of a lightweight Convolutional Neural Network (CNN) model specifically tailored for pedestrian detection. The backbone architecture of the model was meticulously searched using a network search engine predicated on the Multi-Objective Genetic Algorithm (NSGA-II) with a customized strategy. Notably, we shifted the search space from central processing units to Multi-Processor System-on-Chip (MPSoC) devices, aligning with the practical considerations of real-world applications. Our proposed model capitalizes on the path aggregation architecture coupled with a lightweight backbone design. The core concept revolves around the efficient transfer of high semantic features from the network's bottom to its top via the shortest path, thereby enhancing detection rates without introducing undue computational complexity. To ensure compatibility with embedded devices with limited memory, the proposed model underwent compression via quantization and pruning techniques. For rigorous evaluation, we tested the pedestrian detection model on the Xilinx ZCU 102 board, utilizing the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset for training and evaluation purposes. The reported results substantiate the efficacy of our proposed model, boasting a mean average precision (mAP) of 93.6 % alongside a commendable processing speed of 13 frames per second (FPS). These outcomes underscore the suitability of the proposed model for real-life scenarios, wherein ensuring a high level of safety remains paramount.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
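The NSGA-II engine behind this backbone search rests on Pareto dominance between objective vectors. A minimal sketch with hypothetical candidate scores (both objectives, detection quality and speed, to be maximized):

def dominates(a, b):
    # a Pareto-dominates b if it is no worse everywhere and better somewhere.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical candidate backbones scored as (mAP, frames per second).
candidates = {"net_a": (93.6, 13.0), "net_b": (91.0, 22.0), "net_c": (90.5, 12.0)}

pareto = [n for n, s in candidates.items()
          if not any(dominates(t, s) for m, t in candidates.items() if m != n)]
print("Pareto front:", pareto)  # net_c is dominated by net_a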
Chehade, Adel; Ragusa, Edoardo; Gastaldo, Paolo; Zunino, Rodolfo
Efficient Traffic Classification using HW-NAS: Advanced Analysis and Optimization for Cybersecurity on Resource-Constrained Devices Technical Report
2025.
@techreport{chehade2025efficienttrafficclassificationusing,
title = {Efficient Traffic Classification using HW-NAS: Advanced Analysis and Optimization for Cybersecurity on Resource-Constrained Devices},
author = {Adel Chehade and Edoardo Ragusa and Paolo Gastaldo and Rodolfo Zunino},
url = {https://arxiv.org/abs/2506.11319},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Chehade, Adel; Ragusa, Edoardo; Gastaldo, Paolo; Zunino, Rodolfo
Energy-Efficient Deep Learning for Traffic Classification on Microcontrollers Miscellaneous
2025.
@misc{chehade2025energyefficientdeeplearningtraffic,
title = {Energy-Efficient Deep Learning for Traffic Classification on Microcontrollers},
author = {Adel Chehade and Edoardo Ragusa and Paolo Gastaldo and Rodolfo Zunino},
url = {https://arxiv.org/abs/2506.10851},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Zhang, Xixi; Wang, Yu; Ohtsuki, Tomoaki; Gui, Guan; Yuen, Chau; Renzo, Marco Di; Sari, Hikmet
Malware Traffic Classification via Expandable Class Incremental Learning With Architecture Search Journal Article
In: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 6074-6085, 2025.
@article{11030736,
title = {Malware Traffic Classification via Expandable Class Incremental Learning With Architecture Search},
author = {Xixi Zhang and Yu Wang and Tomoaki Ohtsuki and Guan Gui and Chau Yuen and Marco Di Renzo and Hikmet Sari},
url = {https://ieeexplore.ieee.org/abstract/document/11030736},
doi = {10.1109/TIFS.2025.3578937},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Information Forensics and Security},
volume = {20},
pages = {6074-6085},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Oluremi, David
AutoML-Driven Architecture Search for Functional Genomics Prediction Using Deep Neural Networks Journal Article
In: 2025.
@article{articlen,
title = {AutoML-Driven Architecture Search for Functional Genomics Prediction Using Deep Neural Networks},
author = {David Oluremi},
url = {https://www.researchgate.net/publication/392591816_AutoML-Driven_Architecture_Search_for_Functional_Genomics_Prediction_Using_Deep_Neural_Networks},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Xiaofei; Zhao, Yunfeng; Qiu, Chao; Gao, Fei; Zhao, Zebo; Yao, Haipeng; Li, Xiuhua
Energy-Friendly Federated Neural Architecture Search for Industrial Cyber-Physical Systems Journal Article
In: IEEE Journal on Selected Areas in Communications, pp. 1-1, 2025.
@article{11030259,
title = {Energy-Friendly Federated Neural Architecture Search for Industrial Cyber-Physical Systems},
author = {Xiaofei Wang and Yunfeng Zhao and Chao Qiu and Fei Gao and Zebo Zhao and Haipeng Yao and Xiuhua Li},
url = {https://ieeexplore.ieee.org/abstract/document/11030259},
doi = {10.1109/JSAC.2025.3574599},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Journal on Selected Areas in Communications},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hou, Libin; Wang, Linyuan; Hou, Senbao; Liu, Tianyuan; Ma, Shuxiao; Chen, Jian; Yan, Bin
Low redundancy cell-based Neural Architecture Search for large convolutional neural networks Journal Article
In: Neurocomputing, vol. 649, pp. 130644, 2025, ISSN: 0925-2312.
@article{HOU2025130644,
title = {Low redundancy cell-based Neural Architecture Search for large convolutional neural networks},
author = {Libin Hou and Linyuan Wang and Senbao Hou and Tianyuan Liu and Shuxiao Ma and Jian Chen and Bin Yan},
url = {https://www.sciencedirect.com/science/article/pii/S0925231225013165},
doi = {10.1016/j.neucom.2025.130644},
issn = {0925-2312},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neurocomputing},
volume = {649},
pages = {130644},
abstract = {The cell-based search space is one of the main paradigms in Neural Architecture Search (NAS). However, the current research on this search space tends to optimize on small-size models, and the performance improvement of NAS might be stuck in a bottleneck. This situation has led to a growing performance gap between NAS and hand-designed models in recent years. In this paper, we focus on how to effectively expand the cell-based search space and proposes Low redundancy Cell-based Neural Architecture Search for Large Convolutional neural networks (LC2NAS), a gradient-based NAS method to search large-scale convolutional models with better performance based on low redundant cell search space. Specifically, a cell-based search space with low redundancy and large kernel is designed. Then train and sample a super network under computational constraints. Finally the network structure is optimized by gradient-based search. Experimental results show that the performance of the proposed search method is comparable to the popular hand-designed models in recent years at different scales. Moreover, LC-NASNet-B achieves an 83.7% classification accuracy on the ImageNet-1k dataset with 86.2M parameters, surpassing previous NAS methods and comparable to the most prominent hand-designed models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Huang, Zitong; Montazerin, Mansooreh; Srivastava, Ajitesh
SWAT-NN: Simultaneous Weights and Architecture Training for Neural Networks in a Latent Space Technical Report
2025.
@techreport{huang2025swatnnsimultaneousweightsarchitecture,
title = {SWAT-NN: Simultaneous Weights and Architecture Training for Neural Networks in a Latent Space},
author = {Zitong Huang and Mansooreh Montazerin and Ajitesh Srivastava},
url = {https://arxiv.org/abs/2506.08270},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kim, Jingeun; Yoon, Yourim
Pruning for efficient DenseNet via surrogate-model-assisted genetic algorithm considering neural architecture search proxies Journal Article
In: Swarm and Evolutionary Computation, vol. 97, pp. 101983, 2025, ISSN: 2210-6502.
@article{KIM2025101983,
title = {Pruning for efficient DenseNet via surrogate-model-assisted genetic algorithm considering neural architecture search proxies},
author = {Jingeun Kim and Yourim Yoon},
url = {https://www.sciencedirect.com/science/article/pii/S2210650225001415},
doi = {10.1016/j.swevo.2025.101983},
issn = {2210-6502},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Swarm and Evolutionary Computation},
volume = {97},
pages = {101983},
abstract = {Recently, convolution neural networks have achieved remarkable progress in computer vision. These neural networks have a large number of parameters, which should be limited in resource-constrained environments. To address this problem, new pruning approaches have explored using neural architecture search (NAS) to determine optimal subnetworks. We propose a novel pruning framework using a surrogate model-assisted genetic algorithm considering NAS proxies (SMA-GA-NP). We applied multi-dimensional encoding and designed crossover and mutation methods. To reduce the search time of NAS, we leveraged a surrogate model to approximate the fitness value of individuals and used NAS proxies, such as reducing the number of epochs and the training set size. The DenseNet-BC (k = 12) model was used as the baseline. We achieved highly competitive performance on CIFAR-10 compared with other GA-based pruning methods and baselines. For CIFAR-100, we reduced the number of parameters by 11.25% to 18.75%, while achieving less than 1% performance degradation compared to the baseline model. These findings highlight SMA-GA-NP’s effectiveness in significantly reducing the number of parameters while having a negligible impact on the model’s performance. We also conducted an ablation study to explore the efficiency of the GA settings, the surrogate model, and NAS proxies in SMA-GA-NP and identified the current limitations and future potential of SMA-GA-NP.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
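The interplay between a cheap surrogate and occasional true evaluations under NAS proxies can be sketched as follows; everything here is a toy stand-in (a 1-nearest-neighbour surrogate and a synthetic fitness), not the paper's actual models:

import random

def true_fitness(mask):
    # Stand-in for the paper's NAS-proxy evaluation (short training on a
    # reduced dataset); here a toy score peaking at 50% pruning.
    density = sum(mask) / len(mask)
    return 1.0 - abs(density - 0.5)

archive = {}  # mask -> evaluated fitness (the surrogate's training data)

def surrogate_fitness(mask):
    # 1-nearest-neighbour surrogate: predict fitness from the closest
    # previously evaluated mask by Hamming distance.
    if not archive:
        return 0.0
    nearest = min(archive, key=lambda m: sum(a != b for a, b in zip(m, mask)))
    return archive[nearest]

population = [tuple(random.randint(0, 1) for _ in range(12)) for _ in range(10)]
for gen in range(20):
    offspring = []
    for parent in population:
        child = list(parent)
        child[random.randrange(12)] ^= 1
        offspring.append(tuple(child))
    # Pre-screen offspring with the surrogate; truly evaluate only the best.
    offspring.sort(key=surrogate_fitness, reverse=True)
    for child in offspring[:3]:                 # budgeted true evaluations
        archive[child] = true_fitness(child)
    population = sorted(set(population + offspring),
                        key=lambda m: archive.get(m, surrogate_fitness(m)),
                        reverse=True)[:10]

best = max(archive, key=archive.get)
print("best mask:", best, "fitness:", round(archive[best], 3))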
Zhong, Zhiwei; Liu, Xianming; Jiang, Junjun; Zhao, Debin; Wang, Shiqi
Dual-Level Cross-Modality Neural Architecture Search for Guided Image Super-Resolution Journal Article
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-18, 2025.
@article{11029618,
title = {Dual-Level Cross-Modality Neural Architecture Search for Guided Image Super-Resolution},
author = {Zhiwei Zhong and Xianming Liu and Junjun Jiang and Debin Zhao and Shiqi Wang},
url = {https://ieeexplore.ieee.org/abstract/document/11029618},
doi = {10.1109/TPAMI.2025.3578468},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
pages = {1-18},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jing, Haizhao; Zhang, Haokui; Shang, Zhenhao; Xiao, Rong; Wang, Peng; Zhang, Yanning
Language Embedding Meets Dynamic Graph: A New Exploration for Neural Architecture Representation Learning Technical Report
2025.
@techreport{jing2025languageembeddingmeetsdynamic,
title = {Language Embedding Meets Dynamic Graph: A New Exploration for Neural Architecture Representation Learning},
author = {Haizhao Jing and Haokui Zhang and Zhenhao Shang and Rong Xiao and Peng Wang and Yanning Zhang},
url = {https://arxiv.org/abs/2506.07735},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lv, Zeqiong; Qian, Chao; Liu, Yun; Fan, Jiahao; Sun, Yanan
Runtime Analysis of Evolutionary NAS for Multiclass Classification Technical Report
2025.
@techreport{lv2025runtimeanalysisevolutionarynas,
title = {Runtime Analysis of Evolutionary NAS for Multiclass Classification},
author = {Zeqiong Lv and Chao Qian and Yun Liu and Jiahao Fan and Yanan Sun},
url = {https://arxiv.org/abs/2506.06019},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Martyniuk, Darya; Jung, Johannes; Barta, Daniel; Paschke, Adrian
Benchmarking Quantum Architecture Search with Surrogate Assistance Technical Report
2025.
@techreport{martyniuk2025benchmarkingquantumarchitecturesearch,
title = {Benchmarking Quantum Architecture Search with Surrogate Assistance},
author = {Darya Martyniuk and Johannes Jung and Daniel Barta and Adrian Paschke},
url = {https://arxiv.org/abs/2506.06762},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Gode, Chetan; Nanche, Bhushan Marutirao; Dhabliya, Dharmesh; Shelke, Rahul Dnyanoba; Patil, Rajendra V.; Bhosle, Sushma
Dynamic neural architecture search: A pathway to efficiently optimized deep learning models Journal Article
In: Journal of Information and Optimization Sciences, vol. 46, no. 4-A, pp. 1117–1127, 2025.
@article{doi:10.47974/JIOS-1896,
title = {Dynamic neural architecture search: A pathway to efficiently optimized deep learning models},
author = {Chetan Gode and Bhushan Marutirao Nanche and Dharmesh Dhabliya and Rahul Dnyanoba Shelke and Rajendra V. Patil and Sushma Bhosle},
url = {https://doi.org/10.47974/JIOS-1896},
doi = {10.47974/JIOS-1896},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Information and Optimization Sciences},
volume = {46},
number = {4-A},
pages = {1117–1127},
publisher = {Taru Publications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ji, Han; Feng, Yuqi; Fan, Jiahao; Sun, Yanan
Loss Functions for Predictor-based Neural Architecture Search Technical Report
2025.
@techreport{ji2025lossfunctionspredictorbasedneural,
title = {Loss Functions for Predictor-based Neural Architecture Search},
author = {Han Ji and Yuqi Feng and Jiahao Fan and Yanan Sun},
url = {https://arxiv.org/abs/2506.05869},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kazadi, Joël
Exploring Optimal Neural Network Architectures: What benefits does Reinforcement Learning offer? Bachelor Thesis
2025.
@bachelorthesis{unknownc,
title = {Exploring Optimal Neural Network Architectures: What benefits does Reinforcement Learning offer?},
author = {Joël Kazadi},
url = {https://www.researchgate.net/publication/392493431_Exploring_Optimal_Neural_Network_Architectures_What_benefits_does_Reinforcement_Learning_offer},
doi = {10.13140/RG.2.2.17572.18564},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {bachelorthesis}
}
Yang, Yu; Wang, Siqi; Zhang, Gan; Wang, Qifu; Qin, Yao; Zhai, Dandan; Yang, Zhiqing; Li, Peng
GA-OMTL: Genetic algorithm optimization for multi-task neural architecture search in NIR spectroscopy Journal Article
In: Expert Systems with Applications, vol. 290, pp. 128517, 2025, ISSN: 0957-4174.
@article{YANG2025128517,
title = {GA-OMTL: Genetic algorithm optimization for multi-task neural architecture search in NIR spectroscopy},
author = {Yu Yang and Siqi Wang and Gan Zhang and Qifu Wang and Yao Qin and Dandan Zhai and Zhiqing Yang and Peng Li},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425021360},
doi = {10.1016/j.eswa.2025.128517},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
volume = {290},
pages = {128517},
abstract = {Near-infrared (NIR) spectroscopy analysis based on deep learning has emerged as a powerful tool for the quality assessment of food and agricultural products. However, most existing multi-task deep learning (MTL) network architectures rely on manual design, struggle to efficiently adapt to diverse datasets, and often neglect the interaction of task-specific (private) features across tasks. To address these challenges, this study proposes a genetic algorithm (GA)-optimized MTL model, termed GA-OMTL, which integrates the strengths of neural architecture search and GA for multi-task prediction of spectral data. The model enhances both feature extraction and task-specific feature interaction by incorporating searchable components such as residual modules (Resblock), batch normalization (BN) layers, Squeeze-and-Excitation (SE) modules, and feature interaction modules. The effectiveness of GA-OMTL was validated using two datasets: American ginseng and wheat flour. In the prediction of protopanaxatriol-type ginsenosides (PPT) and protopanaxadiol-type ginsenosides (PPD) in American ginseng, the R2, RMSE, and RPD values achieved by GA-OMTL were 0.93, 0.70 mg/g, and 3.83 (PPT), and 0.98, 2.03 mg/g, and 7.16 (PPD), respectively. For the prediction of protein and moisture content in wheat flour, the R2, RMSE, and RPD values were 0.99, 0.29 mg/g, and 8.22 (protein), and 0.97, 0.22 mg/g, and 5.67 (moisture), respectively. The experimental results demonstrate that GA-OMTL outperforms three comparison methods in prediction accuracy, highlighting its potential for complex spectral tasks and confirming the practicality and robustness of the proposed model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
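A chromosome over the searchable components named in the abstract (Resblock, BN, SE, feature interaction) might look like the following toy encoding; the gene names and operators are illustrative, not the paper's:

import random

GENES = ["use_resblock", "use_bn", "use_se", "use_interaction"]

def random_chromosome():
    # One binary gene per searchable module, plus an illustrative depth gene.
    return {g: random.randint(0, 1) for g in GENES} | {"depth": random.randint(2, 6)}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(c, rate=0.2):
    c = dict(c)
    for g in GENES:
        if random.random() < rate:
            c[g] ^= 1
    return c

child = mutate(crossover(random_chromosome(), random_chromosome()))
print(child)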
Jha, Abhash Kumar; Moradian, Shakiba; Krishnakumar, Arjun; Rapp, Martin; Hutter, Frank
confopt: A Library for Implementation and Evaluation of Gradient-based One-Shot NAS Methods Proceedings Article
In: AutoML 2025 ABCD Track, 2025.
@inproceedings{jha2025textttconfopt,
title = {\texttt{confopt}: A Library for Implementation and Evaluation of Gradient-based One-Shot NAS Methods},
author = {Abhash Kumar Jha and Shakiba Moradian and Arjun Krishnakumar and Martin Rapp and Frank Hutter},
url = {https://openreview.net/forum?id=serEYBjyhK},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {AutoML 2025 ABCD Track},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhao, Lingxiao; Zeng, Xianwen
BTSEG-Nas: a neural network architecture search-based multimodal MRI segmentation network for brain tumors Proceedings Article
In: Zhu, Peicheng; Lin, Guihua (Ed.): Fifth International Conference on Applied Mathematics, Modelling, and Intelligent Computing (CAMMIC 2025), pp. 136441X, International Society for Optics and Photonics SPIE, 2025.
@inproceedings{10.1117/12.3070322,
title = {BTSEG-Nas: a neural network architecture search-based multimodal MRI segmentation network for brain tumors},
author = {Lingxiao Zhao and Xianwen Zeng},
editor = {Peicheng Zhu and Guihua Lin},
url = {https://doi.org/10.1117/12.3070322},
doi = {10.1117/12.3070322},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Fifth International Conference on Applied Mathematics, Modelling, and Intelligent Computing (CAMMIC 2025)},
volume = {13644},
pages = {136441X},
publisher = {SPIE},
organization = {International Society for Optics and Photonics},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}