Maintained by Difan Deng and Marius Lindauer.
The following list collects papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-Based spaces, the awesome-transformer-search repo is all you need.
2025
Mendis, Hashan Roshantha; Yen, Chih-Hsuan; Kang, Chih-Kai; Hsiu, Pi-Cheng
Intermittent-Friendly Neural Architecture Search: Demystifying Accuracy and Overhead Trade-Offs Journal Article
In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, pp. 1-1, 2025.
@article{10944793,
title = {Intermittent-Friendly Neural Architecture Search: Demystifying Accuracy and Overhead Trade-Offs},
author = {Hashan Roshantha Mendis and Chih-Hsuan Yen and Chih-Kai Kang and Pi-Cheng Hsiu},
url = {https://ieeexplore.ieee.org/abstract/document/10944793},
doi = {10.1109/TCAD.2025.3555963},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hoang, Anh Tuan; Viharos, Zsolt János
Model Input-Output Configuration Search With Embedded Feature Selection for Sensor Time-Series and Image Classification Journal Article
In: IEEE Access, vol. 13, pp. 58960-58977, 2025.
@article{10943129,
title = {Model Input-Output Configuration Search With Embedded Feature Selection for Sensor Time-Series and Image Classification},
author = {Anh Tuan Hoang and Zsolt János Viharos},
url = {https://doi.org/10.1109/ACCESS.2025.3555379},
doi = {10.1109/ACCESS.2025.3555379},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {58960-58977},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lin, Yuxin; Zhu, Chaoyang
Detection of Pedestrian Movement Poses in High-Speed Autonomous Driving Environments Using DVS Proceedings Article
In: Yang, Jie; Zheng, Yuanjie; Gong, Chen (Ed.): Pattern Analysis and Machine Intelligence, pp. 54–66, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-96-3349-4.
@inproceedings{10.1007/978-981-96-3349-4_8,
title = {Detection of Pedestrian Movement Poses in High-Speed Autonomous Driving Environments Using DVS},
author = {Yuxin Lin and Chaoyang Zhu},
editor = {Jie Yang and Yuanjie Zheng and Chen Gong},
url = {https://link.springer.com/chapter/10.1007/978-981-96-3349-4_8},
isbn = {978-981-96-3349-4},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Pattern Analysis and Machine Intelligence},
pages = {54–66},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {In the realm of autonomous driving, the detection and prediction of pedestrian movement poses at high speeds are crucial for enhancing vehicular safety. Traditional imaging technologies, while rich in detail, suffer from limitations such as low frame rates and shutter-induced latencies, which can impede the rapid detection necessary in high-speed environments. This paper introduces a novel algorithm that leverages the capabilities of Dynamic Vision Sensors (DVS) to detect pedestrian poses under high-speed conditions. Unlike conventional cameras, DVS operate on the principle of capturing changes in light intensity at each pixel, allowing for data generation with high temporal resolution and minimal latency. Our approach integrates this technology with a Neural Architecture Search (NAS) optimized version of the YOLO-NAS model, specifically adapted to process the unique event-based data produced by DVS. This integration not only enhances the detection capabilities but also significantly reduces the system's response time. Comparative experiments demonstrate that our DVS-based system achieves a mean Average Precision (mAP) of 85.4%. These results underscore the potential of event-based vision sensors in transforming pedestrian detection frameworks within autonomous driving systems, offering substantial improvements in both accuracy and speed.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gupta, Vyom Kumar; Yadav, Abhishek; Kumar, Mirgender; Kumar, Binod; Sunny
On-Chip Implementation of Neural Network-Based Classifier Models for E-Nose With Chemometric Analysis Journal Article
In: IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1-8, 2025.
@article{10938683,
title = {On-Chip Implementation of Neural Network-Based Classifier Models for E-Nose With Chemometric Analysis},
author = {Vyom Kumar Gupta and Abhishek Yadav and Mirgender Kumar and Binod Kumar and Sunny},
url = {https://ieeexplore.ieee.org/abstract/document/10938683},
doi = {10.1109/TIM.2025.3554292},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Instrumentation and Measurement},
volume = {74},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Huang, Junhao; Xue, Bing; Sun, Yanan; Zhang, Mengjie; Yen, Gary G.
Automated design of neural networks with multi-scale convolutions via multi-path weight sampling Journal Article
In: Pattern Recognition, vol. 165, pp. 111605, 2025, ISSN: 0031-3203.
@article{HUANG2025111605,
title = {Automated design of neural networks with multi-scale convolutions via multi-path weight sampling},
author = {Junhao Huang and Bing Xue and Yanan Sun and Mengjie Zhang and Gary G. Yen},
url = {https://www.sciencedirect.com/science/article/pii/S0031320325002651},
doi = {10.1016/j.patcog.2025.111605},
issn = {0031-3203},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Pattern Recognition},
volume = {165},
pages = {111605},
abstract = {The performance of convolutional neural networks (CNNs) relies heavily on the architecture design. Recently, an increasingly prevalent trend in CNN architecture design is the utilization of ingeniously crafted building blocks, e.g., the MixConv module, for improving the model expressivity and efficiency. To leverage the feature learning capability of multi-scale convolution while further reducing its computational complexity, this paper presents a computationally efficient yet powerful module, dubbed EMixConv, by combining parameter-free concatenation-based feature reuse with multi-scale convolution. In addition, we propose a one-shot neural architecture search (NAS) method integrating the EMixConv module to automatically search for the optimal combination of the related architectural parameters. Furthermore, an efficient multi-path weight sampling mechanism is developed to enhance the robustness of weight inheritance in the supernet. We demonstrate the effectiveness of the proposed module and the NAS algorithm on three popular image classification tasks. The developed models, dubbed EMixNets, outperform most state-of-the-art architectures with fewer parameters and computations on the CIFAR datasets. On ImageNet, EMixNet is superior to a majority of compared methods and is also more compact and computationally efficient.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, Oscal Tzyh-Chiang; Chang, Yu-Xuan; Chung, Chih-Yu; Cheng, Ya-Yun; HA, Manh-Hung
Hardware-Aware Iterative One-Shot Neural Architecture Search With Adaptable Knowledge Distillation for Efficient Edge Computing Journal Article
In: IEEE Access, vol. 13, pp. 54204-54222, 2025.
@article{10938148,
title = {Hardware-Aware Iterative One-Shot Neural Architecture Search With Adaptable Knowledge Distillation for Efficient Edge Computing},
author = {Oscal Tzyh-Chiang Chen and Yu-Xuan Chang and Chih-Yu Chung and Ya-Yun Cheng and Manh-Hung HA},
url = {https://ieeexplore.ieee.org/document/10938148},
doi = {10.1109/ACCESS.2025.3554185},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {54204-54222},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Tong, Lyuyang; Liu, Jie; Du, Bo
SceneFormer: Neural Architecture Search of Transformers for Remote Sensing Scene Classification Journal Article
In: IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-15, 2025.
@article{10942436,
title = {SceneFormer: Neural Architecture Search of Transformers for Remote Sensing Scene Classification},
author = {Lyuyang Tong and Jie Liu and Bo Du},
url = {https://ieeexplore.ieee.org/abstract/document/10942436},
doi = {10.1109/TGRS.2025.3555207},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
volume = {63},
pages = {1-15},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhu, Lixian; Wang, Su; Jin, Xiaokun; Zheng, Kai; Zhang, Jian; Sun, Shuting; Tian, Fuze; Cai, Ran; Hu, Bin
MDH-NAS: Accelerating EEG Signal Classification with Mixed-Level Differentiable and Hardware-Aware Neural Architecture Search Journal Article
In: IEEE Internet of Things Journal, pp. 1-1, 2025.
@article{10938080,
title = {MDH-NAS: Accelerating EEG Signal Classification with Mixed-Level Differentiable and Hardware-Aware Neural Architecture Search},
author = {Lixian Zhu and Su Wang and Xiaokun Jin and Kai Zheng and Jian Zhang and Shuting Sun and Fuze Tian and Ran Cai and Bin Hu},
url = {https://ieeexplore.ieee.org/abstract/document/10938080},
doi = {10.1109/JIOT.2025.3553450},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Internet of Things Journal},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Alotaibi, Abrar; Ahmed, Moataz
Neural Architecture Search for Generative Adversarial Networks: A Comprehensive Review and Critical Analysis Journal Article
In: Applied Sciences, vol. 15, no. 7, 2025, ISSN: 2076-3417.
@article{app15073623,
title = {Neural Architecture Search for Generative Adversarial Networks: A Comprehensive Review and Critical Analysis},
author = {Abrar Alotaibi and Moataz Ahmed},
url = {https://www.mdpi.com/2076-3417/15/7/3623},
doi = {10.3390/app15073623},
issn = {2076-3417},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Applied Sciences},
volume = {15},
number = {7},
abstract = {Neural Architecture Search (NAS) has emerged as a pivotal technique in optimizing the design of Generative Adversarial Networks (GANs), automating the search for effective architectures while addressing the challenges inherent in manual design. This paper provides a comprehensive review of NAS methods applied to GANs, categorizing and comparing various approaches based on criteria such as search strategies, evaluation metrics, and performance outcomes. The review highlights the benefits of NAS in improving GAN performance, stability, and efficiency, while also identifying limitations and areas for future research. Key findings include the superiority of evolutionary algorithms and gradient-based methods in certain contexts, the importance of robust evaluation metrics beyond traditional scores like Inception Score (IS) and Fréchet Inception Distance (FID), and the need for diverse datasets in assessing GAN performance. By presenting a structured comparison of existing NAS-GAN techniques, this paper aims to guide researchers in developing more effective NAS methods and advancing the field of GANs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, Zhen-Song; Ding, Hong-Wei; Wang, Xian-Jia; Pedrycz, Witold
ZeroLM: Data-Free Transformer Architecture Search for Language Models Technical Report
2025.
@techreport{chen2025zerolmdatafreetransformerarchitecture,
title = {ZeroLM: Data-Free Transformer Architecture Search for Language Models},
author = {Zhen-Song Chen and Hong-Wei Ding and Xian-Jia Wang and Witold Pedrycz},
url = {https://arxiv.org/abs/2503.18646},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xue, Xin; Zhou, Haoyi; Chen, Tianyu; Zhang, Shuai; Long, Yizhou; Li, Jianxin
Instructing the Architecture Search for Spatial-temporal Sequence Forecasting with LLM Technical Report
2025.
@techreport{xue2025instructingarchitecturesearchspatialtemporal,
title = {Instructing the Architecture Search for Spatial-temporal Sequence Forecasting with LLM},
author = {Xin Xue and Haoyi Zhou and Tianyu Chen and Shuai Zhang and Yizhou Long and Jianxin Li},
url = {https://arxiv.org/abs/2503.17994},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Li, Kefan; Wan, Yuting; Ma, Ailong; Zhong, Yanfei
A Lightweight Multi-Scale and Multi-Attention Hyperspectral Image Classification Network Based on Multi-Stage Search Journal Article
In: IEEE Transactions on Geoscience and Remote Sensing, pp. 1-1, 2025.
@article{10935661,
title = {A Lightweight Multi-Scale and Multi-Attention Hyperspectral Image Classification Network Based on Multi-Stage Search},
author = {Kefan Li and Yuting Wan and Ailong Ma and Yanfei Zhong},
url = {https://ieeexplore.ieee.org/abstract/document/10935661},
doi = {10.1109/TGRS.2025.3553147},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Geoscience and Remote Sensing},
pages = {1-1},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xue, Yu; Hu, Bohan; Neri, Ferrante
A Surrogate Model With Multiple Comparisons and Semi-Online Learning for Evolutionary Neural Architecture Search Journal Article
In: IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1-13, 2025.
@article{10935345,
title = {A Surrogate Model With Multiple Comparisons and Semi-Online Learning for Evolutionary Neural Architecture Search},
author = {Yu Xue and Bohan Hu and Ferrante Neri},
url = {https://ieeexplore.ieee.org/abstract/document/10935345},
doi = {10.1109/TETCI.2025.3547621},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Emerging Topics in Computational Intelligence},
pages = {1-13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Mansheng; Gu, Yu; Yang, Lidong; Zhang, Baohua; Wang, Jing; Lu, Xiaoqi; Li, Jianjun; Liu, Xin; Zhao, Ying; Yu, Dahua; Tang, Siyuan; He, Qun
A novel high-precision bilevel optimization method for 3D pulmonary nodule classification Journal Article
In: Physica Medica, vol. 133, pp. 104954, 2025, ISSN: 1120-1797.
@article{WANG2025104954,
title = {A novel high-precision bilevel optimization method for 3D pulmonary nodule classification},
author = {Mansheng Wang and Yu Gu and Lidong Yang and Baohua Zhang and Jing Wang and Xiaoqi Lu and Jianjun Li and Xin Liu and Ying Zhao and Dahua Yu and Siyuan Tang and Qun He},
url = {https://www.sciencedirect.com/science/article/pii/S112017972500064X},
doi = {10.1016/j.ejmp.2025.104954},
issn = {1120-1797},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Physica Medica},
volume = {133},
pages = {104954},
abstract = {Background and objective
Classification of pulmonary nodules is important for the early diagnosis of lung cancer; however, the manual design of classification models requires substantial expert effort. To automate the model design process, we propose a neural architecture search with high-precision bilevel optimization (NAS-HBO) that directly searches for the optimal network on three-dimensional (3D) images.
Methods
We propose a novel high-precision bilevel optimization method (HBOM) to search for an optimal 3D pulmonary nodule classification model. We employed memory optimization techniques with a partially decoupled operation-weighting method to reduce the memory overhead while maintaining path selection stability. Additionally, we introduce a novel maintaining receptive field criterion (MRFC) within the NAS-HBO framework. MRFC narrows the search space by selecting and expanding the 3D Mobile Inverted Residual Bottleneck Block (3D-MBconv) operation based on previous receptive fields, thereby enhancing the scalability and practical application capabilities of NAS-HBO in terms of model complexity and performance.
Results
In this study, 888 CT images, including 554 benign and 450 malignant nodules, were obtained from the LIDC-IDRI dataset. The results showed that NAS-HBO achieved an impressive accuracy of 91.51 % after less than 6 h of searching, utilizing a mere 12.79 M parameters.
Conclusion
The proposed NAS-HBO method effectively automates the design of 3D pulmonary nodule classification models, achieving impressive accuracy with efficient parameters. By incorporating the HBOM and MRFC techniques, we demonstrated enhanced accuracy and scalability in model optimization for early lung cancer diagnosis. The related codes and results have been released at https://github.com/GuYuIMUST/NAS-HBO.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Winter, Benjamin David; Teahan, William John
Evaluating a Novel Neuroevolution and Neural Architecture Search System Technical Report
2025.
@techreport{winter2025evaluatingnovelneuroevolutionneural,
title = {Evaluating a Novel Neuroevolution and Neural Architecture Search System},
author = {Benjamin David Winter and William John Teahan},
url = {https://arxiv.org/abs/2503.10869},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Winter, Benjamin David; Teahan, William J.
Ecological Neural Architecture Search Technical Report
2025.
@techreport{winter2025ecologicalneuralarchitecturesearch,
title = {Ecological Neural Architecture Search},
author = {Benjamin David Winter and William J. Teahan},
url = {https://arxiv.org/abs/2503.10908},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Jeon, Jeimin; Oh, Youngmin; Lee, Junghyup; Baek, Donghyeon; Kim, Dohyung; Eom, Chanho; Ham, Bumsub
Subnet-Aware Dynamic Supernet Training for Neural Architecture Search Technical Report
2025.
@techreport{jeon2025subnetawaredynamicsupernettraining,
title = {Subnet-Aware Dynamic Supernet Training for Neural Architecture Search},
author = {Jeimin Jeon and Youngmin Oh and Junghyup Lee and Donghyeon Baek and Dohyung Kim and Chanho Eom and Bumsub Ham},
url = {https://arxiv.org/abs/2503.10740},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Onzo, Bernard-marie; Xue, Yu; Neri, Ferrante
Surrogate-assisted evolutionary neural architecture search based on smart-block discovery Journal Article
In: Expert Systems with Applications, vol. 277, pp. 127237, 2025, ISSN: 0957-4174.
@article{ONZO2025127237,
title = {Surrogate-assisted evolutionary neural architecture search based on smart-block discovery},
author = {Bernard-marie Onzo and Yu Xue and Ferrante Neri},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425008590},
doi = {10.1016/j.eswa.2025.127237},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
volume = {277},
pages = {127237},
abstract = {Neural architecture search (NAS) has emerged as a powerful method for automating neural network design, yet its high computational cost remains a significant challenge. This paper introduces hybrid training-less neural architecture search (HYTES-NAS), a novel hybrid NAS framework that integrates evolutionary computation with a training-free evaluation strategy, significantly reducing computational demands while maintaining high search efficiency. Unlike conventional NAS methods that rely on full model training, HYTES-NAS leverages a surrogate-assisted scoring mechanism to assess candidate architectures efficiently. Additionally, a smart-block discovery strategy and particle swarm optimisation are employed to refine the search space and accelerate convergence. Experimental results on multiple NAS benchmarks demonstrate that HYTES-NAS achieves superior performance with significantly lower computational cost compared to state-of-the-art NAS methods. This work provides a promising and scalable solution for efficient NAS, making high-performance architecture search more accessible for real-world applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xie, Xiaofeng; Gao, Yuelin; Zhang, Yuming
An improved Artificial Protozoa Optimizer for CNN architecture optimization Journal Article
In: Neural Networks, pp. 107368, 2025, ISSN: 0893-6080.
@article{XIE2025107368,
title = {An improved Artificial Protozoa Optimizer for CNN architecture optimization},
author = {Xiaofeng Xie and Yuelin Gao and Yuming Zhang},
url = {https://www.sciencedirect.com/science/article/pii/S0893608025002473},
doi = {10.1016/j.neunet.2025.107368},
issn = {0893-6080},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neural Networks},
pages = {107368},
abstract = {In this paper, we propose a novel neural architecture search (NAS) method called MAPOCNN, which leverages an enhanced version of the Artificial Protozoa Optimizer (APO) to optimize the architecture of Convolutional Neural Networks (CNNs). The APO is known for its rapid convergence, high stability, and minimal parameter involvement. To further improve its performance, we introduce MAPO (Modified Artificial Protozoa Optimizer), which incorporates the phototaxis behavior of protozoa. This addition helps mitigate the risk of premature convergence, allowing the algorithm to explore a broader range of possible CNN architectures and ultimately identify more optimal solutions. Through rigorous experimentation on benchmark datasets, including Rectangle and Mnist-random, we demonstrate that MAPOCNN not only achieves faster convergence times but also performs competitively when compared to other state-of-the-art NAS algorithms. The results highlight the effectiveness of MAPOCNN in efficiently discovering CNN architectures that outperform existing methods in terms of both speed and accuracy. This work presents a promising direction for optimizing deep learning architectures using biologically inspired optimization techniques.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, Tianchen; Wang, Xianpeng; Song, Xiangman
Multiobjective Backbone Network Architecture Search Based on Transfer Learning in Steel Defect Detection Journal Article
In: Neurocomputing, pp. 130012, 2025, ISSN: 0925-2312.
@article{ZHAO2025130012,
title = {Multiobjective Backbone Network Architecture Search Based on Transfer Learning in Steel Defect Detection},
author = {Tianchen Zhao and Xianpeng Wang and Xiangman Song},
url = {https://www.sciencedirect.com/science/article/pii/S0925231225006848},
doi = {10.1016/j.neucom.2025.130012},
issn = {0925-2312},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neurocomputing},
pages = {130012},
abstract = {In recent years, steel defect detection methods based on deep learning have been widely used. However, due to the shape specificity of steel defects and data scarcity, using existing convolutional neural network architectures for training requires significant expertise and time to fine-tune the hyperparameters. Transfer learning effectively tackles the challenges of data scarcity or limited computing resources by transferring domain knowledge from source tasks to related target tasks, reducing the resource consumption of model training from scratch. In this paper, we propose a transfer learning-based multiobjective backbone network architecture search method (TMBNAS). First, TMBNAS formulates defect detection network search as a multiobjective problem while optimizing its detection accuracy and model complexity. Second, an effective variable-length encoding strategy is designed to represent different building blocks and unpredictable optimal depths in convolutional neural networks, and targeted improvements are made to the crossover and mutation operators. For the specificity of the steel defect detection task, a transfer learning strategy based on similar knowledge is used to transfer the architecture and weight parameters obtained from the search in the source task to the target task, and adjust and optimize them. Finally, a dynamic adjustment mechanism based on actual constraints is designed during the search process to gradually approximate the optimal non-dominated solution set with higher detection accuracy without losing its population diversity. The proposed method is tested on the continuous casting slab and workpiece defect datasets. The experimental results show that the model searched by the proposed method can achieve better detection performance compared with manually designed deep learning algorithms and classical network architecture search methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gambella, Matteo; Pittorino, Fabrizio; Roveri, Manuel
Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in Neural Architecture Search Technical Report
2025.
@techreport{gambella2025architectureawareminimizationa2mflat,
title = {Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in Neural Architecture Search},
author = {Matteo Gambella and Fabrizio Pittorino and Manuel Roveri},
url = {https://arxiv.org/abs/2503.10404},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Tan, Wanrong; Huang, Lingling; Li, Hong; Tan, Menghao; Xie, Jin; Gao, Weifeng
Neural architecture search with integrated template-modules for efficient defect detection Journal Article
In: Expert Systems with Applications, pp. 127211, 2025, ISSN: 0957-4174.
@article{TAN2025127211,
title = {Neural architecture search with integrated template-modules for efficient defect detection},
author = {Wanrong Tan and Lingling Huang and Hong Li and Menghao Tan and Jin Xie and Weifeng Gao},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425008334},
doi = {10.1016/j.eswa.2025.127211},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
pages = {127211},
abstract = {Surface defect detection in industrial production is critical for quality control. Traditional manual design of detection models is time-consuming, inefficient, and lacks adaptability to diverse defect scenarios. To address these limitations, we propose TMNAS (Template-Module Neural Architecture Search), a bi-level optimization framework that automates the design of high-performance defect detection models. TMNAS uniquely integrates predefined template-modules into a flexible search space, enabling simultaneous exploration of architectural components and parameters. By incorporating a single-objective genetic algorithm with a computational complexity penalty term, our approach effectively avoids local optima and significantly reduces search resource consumption. Extensive experiments on industrial defect datasets demonstrate that TMNAS surpasses state-of-the-art models, while on the COCO benchmark, it achieves a competitive mean average precision (mAP) of 58.4%, all with lower computational overhead.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Jingyu; Zhang, Hanting; Wang, Jianfeng
Enhancing Time Series Prediction with Evolutionary Algorithm-based Optimization of LSTM Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10889678,
title = {Enhancing Time Series Prediction with Evolutionary Algorithm-based Optimization of LSTM},
author = {Jingyu Sun and Hanting Zhang and Jianfeng Wang},
url = {https://ieeexplore.ieee.org/abstract/document/10889678},
doi = {10.1109/ICASSP49660.2025.10889678},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
R., José Ribamar Durand; Junior, Geraldo Braz; Silva, Italo Francyles Santos; Oliveira, Rui Miguel Gil Costa
HistAttentionNAS: A CNN built via NAS for Penile Cancer Diagnosis using Histopathological Images Journal Article
In: Procedia Computer Science, vol. 256, pp. 764-771, 2025, ISSN: 1877-0509, (CENTERIS - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies).
@article{DURANDR2025764,
title = {HistAttentionNAS: A CNN built via NAS for Penile Cancer Diagnosis using Histopathological Images},
author = {José Ribamar Durand R. and Geraldo Braz Junior and Italo Francyles Santos Silva and Rui Miguel Gil Costa Oliveira},
url = {https://www.sciencedirect.com/science/article/pii/S1877050925005344},
doi = {10.1016/j.procs.2025.02.177},
issn = {1877-0509},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Procedia Computer Science},
volume = {256},
pages = {764-771},
abstract = {Penile cancer, although rare, has an increasing mortality rate in Brazil, highlighting the need for effective diagnostic methods. Artificial Intelligence (AI) in histopathological analysis can speed up and objectify diagnosis, but designing an ideal architecture is challenging. In this study, we propose a neural architecture search (NAS) methodology for detecting penile cancer in digital histopathology images. We explored different configurations of stem blocks and the inclusion of attention mechanisms, highlighting specific preferences depending on the magnification of the images. The results showed that the NAS methodology enabled the discovery of more accurate and optimized architectures for this task, surpassing conventional models. The proposed models achieve 89.5% and 88.5% F1-Score for 40X and 100X magnification, respectively.},
note = {CENTERIS - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Yunlong; Chen, Nan; Wang, Yonghe; Su, Xiangdong; Bao, Feilong
Multilingual Parameter-Sharing Adapters: A Method for Optimizing Low-Resource Neural Machine Translation Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10889761,
title = {Multilingual Parameter-Sharing Adapters: A Method for Optimizing Low-Resource Neural Machine Translation},
author = {Yunlong Zhang and Nan Chen and Yonghe Wang and Xiangdong Su and Feilong Bao},
url = {https://ieeexplore.ieee.org/abstract/document/10889761},
doi = {10.1109/ICASSP49660.2025.10889761},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Xie, Lunchen; Lomurno, Eugenio; Gambella, Matteo; Ardagna, Danilo; Roveri, Manuel; Matteucci, Matteo; Shi, Qingjiang
ZO-DARTS++: An Efficient and Size-Variable Zeroth-Order Neural Architecture Search Algorithm Technical Report
2025.
@techreport{xie2025zodartsefficientsizevariablezerothorder,
title = {ZO-DARTS++: An Efficient and Size-Variable Zeroth-Order Neural Architecture Search Algorithm},
author = {Lunchen Xie and Eugenio Lomurno and Matteo Gambella and Danilo Ardagna and Manuel Roveri and Matteo Matteucci and Qingjiang Shi},
url = {https://arxiv.org/abs/2503.06092},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhang, Heng; Chen, Ziqian; Xia, Wei; Xiong, Gang; Gou, Gaopeng; Li, Zhen; Huang, Guangyan; Li, Yunpeng
ANASETC: Automatic Neural Architecture Search for Encrypted Traffic Classification Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10890501,
title = {ANASETC: Automatic Neural Architecture Search for Encrypted Traffic Classification},
author = {Heng Zhang and Ziqian Chen and Wei Xia and Gang Xiong and Gaopeng Gou and Zhen Li and Guangyan Huang and Yunpeng Li},
url = {https://ieeexplore.ieee.org/document/10890501},
doi = {10.1109/ICASSP49660.2025.10890501},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zein, Abbas Kassem; Diab, Rand Abou; Yaacoub, Mohamad; Ibrahim, Ali
Neural Architecture Search for Optimized TinyML Applications Proceedings Article
In: Roch, Massimo Ruo; Bellotti, Francesco; Berta, Riccardo; Martina, Maurizio; Ros, Paolo Motto (Ed.): Applications in Electronics Pervading Industry, Environment and Society, pp. 481–488, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-84100-2.
@inproceedings{10.1007/978-3-031-84100-2_57,
title = {Neural Architecture Search for Optimized TinyML Applications},
author = {Abbas Kassem Zein and Rand Abou Diab and Mohamad Yaacoub and Ali Ibrahim},
editor = {Massimo Ruo Roch and Francesco Bellotti and Riccardo Berta and Maurizio Martina and Paolo Motto Ros},
url = {https://link.springer.com/chapter/10.1007/978-3-031-84100-2_57},
isbn = {978-3-031-84100-2},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Applications in Electronics Pervading Industry, Environment and Society},
pages = {481–488},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Integrating machine learning algorithms on circuits with low power consumption and low hardware complexity is challenging at the different levels of the design process. In network design at the software level, it is crucial to balance a high classification accuracy, while minimizing model complexity to reduce hardware demands. This paper proposes a search approach integrated with the Neural Architecture Search (NAS) to enhance the performance and reduce the complexity of deep learning models. Accuracy and number of Floating-Point Operations Per Second (FLOPS) are employed as evaluation metrics for the targeted models. The experimental results demonstrate that the proposed method outperforms similar state-of-the-art architectures while exhibiting comparable accuracy with up to a 70% reduction in complexity.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Capello, Alessio; Berta, Riccardo; Fresta, Matteo; Lazzaroni, Luca; Bellotti, Francesco
Leveraging Neural Architecture Search for Structural Health Monitoring on Resource-Constrained Devices Proceedings Article
In: Roch, Massimo Ruo; Bellotti, Francesco; Berta, Riccardo; Martina, Maurizio; Ros, Paolo Motto (Ed.): Applications in Electronics Pervading Industry, Environment and Society, pp. 323–330, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-031-84100-2.
@inproceedings{10.1007/978-3-031-84100-2_38,
title = {Leveraging Neural Architecture Search for Structural Health Monitoring on Resource-Constrained Devices},
author = {Alessio Capello and Riccardo Berta and Matteo Fresta and Luca Lazzaroni and Francesco Bellotti},
editor = {Massimo Ruo Roch and Francesco Bellotti and Riccardo Berta and Maurizio Martina and Paolo Motto Ros},
url = {https://link.springer.com/chapter/10.1007/978-3-031-84100-2_38},
isbn = {978-3-031-84100-2},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Applications in Electronics Pervading Industry, Environment and Society},
pages = {323–330},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {In recent decades signal processing incorporated the capabilities offered by Deep Learning (DL) models, especially for complex tasks. DL models demand significant memory, power, and computational resources, posing challenges for Microcontroller Units (MCUs) with limited capacities. The possibility to run models directly on the edge device is key in connectivity-limited scenarios such as Structural Health Monitoring (SHM). For those scenarios, it is necessary to use Tiny Machine Learning techniques to reduce computational requirements. This study focuses on the impact of the extended version of the state-of-the-art Neural Architecture Search (NAS) tool, μNAS, for SHM applications, targeting four commonly used MCUs. Our assessment is based on the Z24 Bridge benchmark dataset, a common dataset for SHM we employed to train and evaluate models. We then discuss whether the models found fit the constraints of the MCUs and the possible tradeoffs between error rate and model computational requirements. We also offer a comparison with the Raspberry Pi 4 Model B to highlight μNAS's capability in achieving high accuracy with higher computing capabilities. The obtained results are promising, as the found models satisfy the given constraints both in terms of accuracy and memory footprint.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Miriyala, Srinivas; Vajrala, Sowmya; Kumar, Hitesh; Kodavanti, Sravanth; Rajendiran, Vikram
Mobile-friendly Image de-noising: Hardware Conscious Optimization for Edge Application Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10888855,
title = {Mobile-friendly Image de-noising: Hardware Conscious Optimization for Edge Application},
author = {Srinivas Miriyala and Sowmya Vajrala and Hitesh Kumar and Sravanth Kodavanti and Vikram Rajendiran},
url = {https://ieeexplore.ieee.org/document/10888855},
doi = {10.1109/ICASSP49660.2025.10888855},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Li, Xin; Fu, Keren; Zhao, Qijun
Camouflaged Object Detection via Neural Architecture Search Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10887976,
title = {Camouflaged Object Detection via Neural Architecture Search},
author = {Xin Li and Keren Fu and Qijun Zhao},
url = {https://ieeexplore.ieee.org/abstract/document/10887976},
doi = {10.1109/ICASSP49660.2025.10887976},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zein, Hadi Al; Waterlaat, Nick; Alkanat, Tunc
Neural Architecture Search for Ultra-low Memory Blood Glucose Forecasting on the Edge Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10890864,
title = {Neural Architecture Search for Ultra-low Memory Blood Glucose Forecasting on the Edge},
author = {Hadi Al Zein and Nick Waterlaat and Tunc Alkanat},
url = {https://ieeexplore.ieee.org/document/10890864},
doi = {10.1109/ICASSP49660.2025.10890864},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Liu, Wenbo; Deng, Tao; Yan, Fei
HID-NAS: A Novel Neural Architecture Search Pipeline for High Information Density Data Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10889095,
title = {HID-NAS: A Novel Neural Architecture Search Pipeline for High Information Density Data},
author = {Wenbo Liu and Tao Deng and Fei Yan},
url = {https://ieeexplore.ieee.org/abstract/document/10889095},
doi = {10.1109/ICASSP49660.2025.10889095},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, Binyan; Ren, Ao; Zhang, Zihao; Duan, Moming; Liu, Duo; Tan, Yujuan; Zhong, Kan
MPNAS: Multimodal Sentiment Analysis Pruning via Neural Architecture Search Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10887670,
title = {MPNAS: Multimodal Sentiment Analysis Pruning via Neural Architecture Search},
author = {Binyan Zhang and Ao Ren and Zihao Zhang and Moming Duan and Duo Liu and Yujuan Tan and Kan Zhong},
url = {https://ieeexplore.ieee.org/abstract/document/10887670},
doi = {10.1109/ICASSP49660.2025.10887670},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sun, Shuoyang; Zhang, Kaiwen; Fang, Hao; Chen, Bin; Li, Jiawei; Huo, Enze; Xia, Shu-Tao
RobNAS: Robust Neural Architecture Search for Point Cloud Adversarial Defense Proceedings Article
In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2025.
@inproceedings{10890087,
title = {RobNAS: Robust Neural Architecture Search for Point Cloud Adversarial Defense},
author = {Shuoyang Sun and Kaiwen Zhang and Hao Fang and Bin Chen and Jiawei Li and Enze Huo and Shu-Tao Xia},
url = {https://ieeexplore.ieee.org/abstract/document/10890087},
doi = {10.1109/ICASSP49660.2025.10890087},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Chen, Renqi; Nian, Fan; Cen, Yuhui; Peng, Yiheng; Wang, Hongbo; Yu, Zekuan; Luo, Jingjing
L-SSHNN: A Larger search space of Semi-Supervised Hybrid NAS Network for echocardiography segmentation Journal Article
In: Expert Systems with Applications, pp. 127084, 2025, ISSN: 0957-4174.
@article{CHEN2025127084,
title = {L-SSHNN: A Larger search space of Semi-Supervised Hybrid NAS Network for echocardiography segmentation},
author = {Renqi Chen and Fan Nian and Yuhui Cen and Yiheng Peng and Hongbo Wang and Zekuan Yu and Jingjing Luo},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425007067},
doi = {10.1016/j.eswa.2025.127084},
issn = {0957-4174},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Expert Systems with Applications},
pages = {127084},
abstract = {Echocardiography with image segmentation facilitates clinicians in thoroughly analyzing cardiac conditions by providing detailed insights into anatomical structures. However, echocardiography segmentation is challenging due to low image quality with blurred boundaries constrained by the inherent noise, artifacts, and cardiac motion. While manually designed networks have achieved promising results, Neural Architecture Search (NAS) allows for the automatic optimization of network structures. Integrating the strengths of NAS works and meticulously crafted networks becomes meaningful in advancing this field. In this paper, we propose a new Semi-Supervised Hybrid NAS Network with a Larger search space for echocardiography segmentation under limited annotations, termed L-SSHNN. Firstly, we propose a three-level search: inner cell, outer layer, and encoder–decoder design, enlarging the search space. Secondly, the proposed L-SSHNN specifies an architectural non-sharing strategy, allowing diverse structures among different cells. Moreover, we propose a new differentiable architecture search (Darts) method termed separation-combination partially-connected Darts (SC-PC-Darts) to incorporate convolution fusion modules and search for the optimal cell architecture for multi-scale feature extraction. Extensive experiments with other state-of-the-art methods on three publicly available echocardiography datasets demonstrate the superiority of L-SSHNN. Additionally, comparative ablation studies further analyze different configurations of our model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Yingzhe; Fu, Fangfa; Sun, Xuejian
Research on Approximate Computation of Signal Processing Algorithms for AIoT Processors Based on Deep Learning Journal Article
In: Electronics, vol. 14, no. 6, 2025, ISSN: 2079-9292.
@article{electronics14061064,
title = {Research on Approximate Computation of Signal Processing Algorithms for AIoT Processors Based on Deep Learning},
author = {Yingzhe Liu and Fangfa Fu and Xuejian Sun},
url = {https://www.mdpi.com/2079-9292/14/6/1064},
doi = {10.3390/electronics14061064},
issn = {2079-9292},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Electronics},
volume = {14},
number = {6},
abstract = {In the post-Moore era, the excessive amount of information brings great challenges to the performance of computing systems. To cope with these challenges, approximate computation has developed rapidly, which enhances the system performance with minor degradation in accuracy. In this paper, we investigate the utilization of an Artificial Intelligence of Things (AIoT) processor for approximate computing. Firstly, we employed neural architecture search (NAS) to acquire the neural network structure for approximate computation, which approximates the functions of FFT, DCT, FIR, and IIR. Subsequently, based on this structure, we quantized and trained a neural network implemented on the AI accelerator of the MAX78000 development board. To evaluate the performance, we implemented the same functions using the CMSIS-DSP library. The results demonstrate that the computational efficiency of the approximate computation on the AI accelerator is significantly higher compared to traditional DSP implementations. Therefore, the approximate computation based on AIoT devices can be effectively utilized in real-time applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhou, Haoxiang; Wei, Zikun; Liu, Dingbang; Zhang, Liuyang; Ding, Chenchen; Yang, Jiaqi; Mao, Wei; Yu, Hao
A Layer-wised Mixed-Precision CIM Accelerator with Bit-level Sparsity-aware ADCs for NAS-Optimized CNNs Proceedings Article
In: Proceedings of the 30th Asia and South Pacific Design Automation Conference, pp. 720–726, Association for Computing Machinery, Tokyo, Japan, 2025, ISBN: 9798400706356.
@inproceedings{10.1145/3658617.3697682,
title = {A Layer-wised Mixed-Precision CIM Accelerator with Bit-level Sparsity-aware ADCs for NAS-Optimized CNNs},
author = {Haoxiang Zhou and Zikun Wei and Dingbang Liu and Liuyang Zhang and Chenchen Ding and Jiaqi Yang and Wei Mao and Hao Yu},
url = {https://doi.org/10.1145/3658617.3697682},
doi = {10.1145/3658617.3697682},
isbn = {9798400706356},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Proceedings of the 30th Asia and South Pacific Design Automation Conference},
pages = {720–726},
publisher = {Association for Computing Machinery},
address = {Tokyo, Japan},
series = {ASPDAC '25},
abstract = {Exploring multiple precisions as well as sparsities for computing-in-memory (CIM) based convolutional accelerators is challenging. To further improve energy efficiency with minimal accuracy loss, this paper develops a neural architecture search (NAS) method to identify precision for each layer of the CNN and further leverages bit-level sparsity. The results indicate that following this approach, ResNet-18 and VGG-16 not only maintain their accuracy but also implement layer-wised mixed-precision effectively. Furthermore, there is a substantial enhancement in the bit-level sparsity of weights within each layer, with an average bit-level sparsity exceeding 90% per bit, thus providing broader possibilities for hardware-level sparsity optimization. In terms of hardware design, a mixed-precision (2/4/8-bit) readout circuit as well as a bit-level sparsity-aware Analog-to-Digital Converter (ADC) are both proposed to reduce system power consumption. Based on bit-level sparsity mixed-precision CNNs benchmarks, post-layout simulation results in 28nm reveal that the proposed accelerator achieves up to 245.72 TOPS/W energy efficiency, which shows about 2.52 – 6.57× improvement compared to the state-of-the-art SRAM-based CIM accelerators.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Yang, Fan; Abedin, Mohammad Zoynul; Hajek, Petr; Qiao, Yanan
Blockchain and Machine Learning in the Green Economy: Pioneering Carbon Neutrality Through Innovative Trading Technologies Journal Article
In: IEEE Transactions on Engineering Management, pp. 1-40, 2025.
@article{10909627,
title = {Blockchain and Machine Learning in the Green Economy: Pioneering Carbon Neutrality Through Innovative Trading Technologies},
author = {Fan Yang and Mohammad Zoynul Abedin and Petr Hajek and Yanan Qiao},
url = {https://ieeexplore.ieee.org/abstract/document/10909627},
doi = {10.1109/TEM.2025.3547730},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Engineering Management},
pages = {1-40},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rong, Xiaobin; Wang, Dahan; Hu, Yuxiang; Zhu, Changbao; Chen, Kai; Lu, Jing
UL-UNAS: Ultra-Lightweight U-Nets for Real-Time Speech Enhancement via Network Architecture Search Miscellaneous
2025.
@misc{rong2025ulunasultralightweightunetsrealtime,
title = {UL-UNAS: Ultra-Lightweight U-Nets for Real-Time Speech Enhancement via Network Architecture Search},
author = {Xiaobin Rong and Dahan Wang and Yuxiang Hu and Changbao Zhu and Kai Chen and Jing Lu},
url = {https://arxiv.org/abs/2503.00340},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Wang, Qiyi; Shao, Yinning; Ma, Yunlong; Liu, Min
NodeNAS: Node-Specific Graph Neural Architecture Search for Out-of-Distribution Generalization Technical Report
2025.
@techreport{wang2025nodenasnodespecificgraphneural,
title = {NodeNAS: Node-Specific Graph Neural Architecture Search for Out-of-Distribution Generalization},
author = {Qiyi Wang and Yinning Shao and Yunlong Ma and Min Liu},
url = {https://arxiv.org/abs/2503.02448},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Chouhan, Avinash; Chutia, Dibyajyoti; Deb, Biswarup; Aggarwal, Shiv Prasad
Attention-Based Neural Architecture Search for Effective Semantic Segmentation of Satellite Images Proceedings Article
In: Noor, Arti; Saroha, Kriti; Pricop, Emil; Sen, Abhijit; Trivedi, Gaurav (Ed.): Emerging Trends and Technologies on Intelligent Systems, pp. 325–335, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-97-5703-9.
@inproceedings{10.1007/978-981-97-5703-9_28,
title = {Attention-Based Neural Architecture Search for Effective Semantic Segmentation of Satellite Images},
author = {Avinash Chouhan and Dibyajyoti Chutia and Biswarup Deb and Shiv Prasad Aggarwal},
editor = {Arti Noor and Kriti Saroha and Emil Pricop and Abhijit Sen and Gaurav Trivedi},
url = {https://link.springer.com/chapter/10.1007/978-981-97-5703-9_28},
isbn = {978-981-97-5703-9},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Emerging Trends and Technologies on Intelligent Systems},
pages = {325–335},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {Semantic segmentation is an important activity in satellite image analysis. The manual design and development of neural architectures for semantic segmentation is very tedious and can result in computationally heavy architectures with redundant computation. Neural architecture search (NAS) produces automated network architectures for a given task considering computational cost and other parameters. In this work, we proposed an attention-based neural architecture search (ANAS), which uses attention layers at cell levels for effective and efficient architecture design for semantic segmentation. The proposed ANAS has achieved better results than previous NAS-based work on two benchmark datasets.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cai, Zicheng; Tang, Yaohua; Lai, Yutao; Wang, Hua; Chen, Zhi; Chen, Hao
SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models Technical Report
2025.
@techreport{cai2025sekiselfevolutionknowledgeinspiration,
title = {SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models},
author = {Zicheng Cai and Yaohua Tang and Yutao Lai and Hua Wang and Zhi Chen and Hao Chen},
url = {https://arxiv.org/abs/2502.20422},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Rumiantsev, Pavel; Coates, Mark
Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation Technical Report
2025.
@techreport{rumiantsev2025variationmattersmitigatingembracing,
title = {Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation},
author = {Pavel Rumiantsev and Mark Coates},
url = {https://arxiv.org/abs/2502.19657},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ding, Zhenyang; Pu, Ninghao; Miao, Qihui; Chen, Zhiqiang; Xu, Yifan; Liu, Hao
Efficient Palm Vein Recognition Optimized by Neural Architecture Search and Hybrid Compression Proceedings Article
In: 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI), pp. 826-832, 2025.
@inproceedings{10894245,
title = {Efficient Palm Vein Recognition Optimized by Neural Architecture Search and Hybrid Compression},
author = {Zhenyang Ding and Ninghao Pu and Qihui Miao and Zhiqiang Chen and Yifan Xu and Hao Liu},
url = {https://ieeexplore.ieee.org/abstract/document/10894245},
doi = {10.1109/ICMSCI62561.2025.10894245},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI)},
pages = {826-832},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ang, Li-Minn; Su, Yuanxin; Seng, Kah Phooi; Smith, Jeremy S.
Customized Binary Convolutional Neural Networks and Neural Architecture Search on Hardware Technologies Journal Article
In: IEEE Nanotechnology Magazine, pp. 1-8, 2025.
@article{10904266,
title = {Customized Binary Convolutional Neural Networks and Neural Architecture Search on Hardware Technologies},
author = {Li-Minn Ang and Yuanxin Su and Kah Phooi Seng and Jeremy S. Smith},
url = {https://ieeexplore.ieee.org/abstract/document/10904266},
doi = {10.1109/MNANO.2025.3533937},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Nanotechnology Magazine},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lu, Kang-Di; Huang, Jia-Cheng; Zeng, Guo-Qiang; Chen, Min-Rong; Geng, Guang-Gang; Weng, Jian
Multi-Objective Discrete Extremal Optimization of Variable-Length Blocks-Based CNN by Joint NAS and HPO for Intrusion Detection in IIoT Journal Article
In: IEEE Transactions on Dependable and Secure Computing, pp. 1-18, 2025.
@article{10902222,
title = {Multi-Objective Discrete Extremal Optimization of Variable-Length Blocks-Based CNN by Joint NAS and HPO for Intrusion Detection in IIoT},
author = {Kang-Di Lu and Jia-Cheng Huang and Guo-Qiang Zeng and Min-Rong Chen and Guang-Gang Geng and Jian Weng},
url = {https://ieeexplore.ieee.org/abstract/document/10902222},
doi = {10.1109/TDSC.2025.3545363},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Dependable and Secure Computing},
pages = {1-18},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Chunchao; Li, Jun; Peng, Mingrui; Rasti, Behnood; Duan, Puhong; Tang, Xuebin; Ma, Xiaoguang
Low-Latency Neural Network for Efficient Hyperspectral Image Classification Journal Article
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. PP, pp. 1-17, 2025.
@article{articleg,
title = {Low-Latency Neural Network for Efficient Hyperspectral Image Classification},
author = {Chunchao Li and Jun Li and Mingrui Peng and Behnood Rasti and Puhong Duan and Xuebin Tang and Xiaoguang Ma},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10900438},
doi = {10.1109/JSTARS.2025.3544583},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
volume = {PP},
pages = {1-17},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
İlker, Günay; Özkan, İnik
SADASNet: A Selective and Adaptive Deep Architecture Search Network with Hyperparameter Optimization for Robust Skin Cancer Classification Journal Article
In: Diagnostics, vol. 15, no. 5, 2025, ISSN: 2075-4418.
@article{diagnostics15050541,
title = {SADASNet: A Selective and Adaptive Deep Architecture Search Network with Hyperparameter Optimization for Robust Skin Cancer Classification},
author = {Günay İlker and İnik Özkan},
url = {https://www.mdpi.com/2075-4418/15/5/541},
doi = {10.3390/diagnostics15050541},
issn = {2075-4418},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Diagnostics},
volume = {15},
number = {5},
abstract = {Background/Objectives: Skin cancer is a major public health concern, where early diagnosis and effective treatment are essential for prevention. To enhance diagnostic accuracy, researchers have increasingly utilized computer vision systems, with deep learning-based approaches becoming the primary focus in recent studies. Nevertheless, there is a notable research gap in the effective optimization of hyperparameters to design optimal deep learning architectures, given the need for high accuracy and lower computational complexity. Methods: This paper puts forth a robust metaheuristic optimization-based approach to develop novel deep learning architectures for multi-class skin cancer classification. This method, designated as the SADASNet (Selective and Adaptive Deep Architecture Search Network by Hyperparameter Optimization) algorithm, is developed based on the Particle Swarm Optimization (PSO) technique. The SADASNet method is adapted to the HAM10000 dataset. Innovative data augmentation techniques are applied to overcome class imbalance issues and enhance the performance of the model. The SADASNet method has been developed to accommodate a range of image sizes, and six different original deep learning models have been produced as a result. Results: The models achieved the following highest performance metrics: 99.31% accuracy, 97.58% F1 score, 97.57% recall, 97.64% precision, and 99.59% specificity. Compared to the most advanced competitors reported in the literature, the proposed method demonstrates superior performance in terms of accuracy and computational complexity. Furthermore, it maintains a broad solution space during parameter optimization. Conclusions: With these outcomes, this method aims to enhance the classification of skin cancer and contribute to the advancement of deep learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mecharbat, Lotfi Abdelkrim; Marchisio, Alberto; Shafique, Muhammad; Ghassemi, Mohammad M.; Alhanai, Tuka
MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly Accurate, Fair, and Robust Edge Deep Neural Networks Technical Report
2025.
@techreport{mecharbat2025moenasmixtureofexpertbasedneuralb,
title = {MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly Accurate, Fair, and Robust Edge Deep Neural Networks},
author = {Lotfi Abdelkrim Mecharbat and Alberto Marchisio and Muhammad Shafique and Mohammad M. Ghassemi and Tuka Alhanai},
url = {https://arxiv.org/abs/2502.07422},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}