Maintained by Difan Deng and Marius Lindauer.
The following list considers papers related to neural architecture search. It is by no means complete. If you miss a paper on the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or directly jump to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-Based spaces, the awesome-transformer-search repo is all you need.
5555
Zhu, Huijuan; Xia, Mengzhen; Wang, Liangmin; Xu, Zhicheng; Sheng, Victor S.
A Novel Knowledge Search Structure for Android Malware Detection Journal Article
In: IEEE Transactions on Services Computing, no. 01, pp. 1-14, 5555, ISSN: 1939-1374.
@article{10750332,
title = { A Novel Knowledge Search Structure for Android Malware Detection },
author = {Huijuan Zhu and Mengzhen Xia and Liangmin Wang and Zhicheng Xu and Victor S. Sheng},
url = {https://doi.ieeecomputersociety.org/10.1109/TSC.2024.3496333},
doi = {10.1109/TSC.2024.3496333},
issn = {1939-1374},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Services Computing},
number = {01},
pages = {1-14},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {While the Android platform is gaining explosive popularity, the number of malicious software (malware) is also increasing sharply. Thus, numerous malware detection schemes based on deep learning have been proposed. However, they usually suffer from cumbersome models with complex architectures and tremendous numbers of parameters. They typically require heavy computational power, which seriously limits their deployment in real application environments with limited resources (e.g., mobile edge devices). To surmount this challenge, we propose a novel Knowledge Distillation (KD) structure—Knowledge Search (KS). KS exploits Neural Architecture Search (NAS) to adaptively bridge the capability gap between teacher and student networks in KD by introducing a parallelized student-wise search approach. In addition, we carefully analyze the characteristics of malware and locate three cost-effective types of features closely related to malicious attacks, namely, Application Programming Interfaces (APIs), permissions and vulnerable components, to characterize Android Applications (Apps). Therefore, based on typical samples collected in recent years, we refine features while exploiting the natural relationship between them, and construct corresponding datasets. Extensive experiments are conducted to investigate the effectiveness and sustainability of KS on these datasets. Our experimental results show that the proposed method yields an accuracy of 97.89% to detect Android malware, which performs better than state-of-the-art solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
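The entry above pairs NAS with knowledge distillation. As a rough sketch of the distillation objective a searched student network would typically be trained with, here is a PyTorch-style loss; the temperature T, mixing weight alpha, and all shapes are illustrative assumptions, not the authors' settings:

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: match the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

A student candidate proposed by the search would be trained against such a combined loss, with the teacher held fixed.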
Zhang, Feifei; Li, Mao; Ge, Jidong; Tang, Fenghui; Zhang, Sheng; Wu, Jie; Luo, Bin
Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-18, 5555, ISSN: 1558-0660.
@article{10742476,
title = { Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing },
author = {Feifei Zhang and Mao Li and Jidong Ge and Fenghui Tang and Sheng Zhang and Jie Wu and Bin Luo},
url = {https://doi.ieeecomputersociety.org/10.1109/TMC.2024.3490835},
doi = {10.1109/TMC.2024.3490835},
issn = {1558-0660},
year = {5555},
date = {5555-11-01},
urldate = {5555-11-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-18},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {With the development of large-scale artificial intelligence services, edge devices are becoming essential providers of data and computing power. However, these edge devices are not immune to malicious attacks. Federated learning (FL), while protecting privacy of decentralized data through secure aggregation, struggles to trace adversaries and lacks optimization for heterogeneity. We discover that FL augmented with Differentiable Architecture Search (DARTS) can improve resilience against backdoor attacks while remaining compatible with secure aggregation. Based on this, we propose a federated neural architecture search (NAS) framework named SLNAS. The architecture of SLNAS is built on three pivotal components: a server-side search space generation method that employs an evolutionary algorithm with dual encodings, a federated NAS process based on DARTS, and client-side architecture tuning that utilizes Gumbel softmax combined with knowledge distillation. To validate robustness, we adapt a framework that includes backdoor attacks based on trigger optimization, data poisoning, and model poisoning, targeting both model weights and architecture parameters. Extensive experiments demonstrate that SLNAS not only effectively counters advanced backdoor attacks but also handles heterogeneity, outperforming defense baselines across a wide range of backdoor attack scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
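As background for the client-side tuning step described in the abstract above, here is a minimal sketch of a Gumbel-softmax relaxation over candidate operations on a single edge; the module layout, operation list, and temperature are assumptions for illustration, not SLNAS code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, ops, tau=1.0):
        super().__init__()
        self.ops = nn.ModuleList(ops)                     # candidate operations
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture logits
        self.tau = tau

    def forward(self, x):
        # Near-one-hot sample that stays differentiable w.r.t. self.alpha.
        w = F.gumbel_softmax(self.alpha, tau=self.tau, hard=True)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

With hard=True the forward pass effectively picks a single operation while gradients still flow to the architecture logits, which is what allows architecture choices to be tuned jointly with a distillation loss.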
Zhang, Yu-Ming; Hsieh, Jun-Wei; Lee, Chun-Chieh; Fan, Kuo-Chin
RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-11, 5555, ISSN: 2691-4581.
@article{10685480,
title = { RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search },
author = {Yu-Ming Zhang and Jun-Wei Hsieh and Chun-Chieh Lee and Kuo-Chin Fan},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2024.3465433},
doi = {10.1109/TAI.2024.3465433},
issn = {2691-4581},
year = {5555},
date = {5555-09-01},
urldate = {5555-09-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-11},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Manually designed CNN architectures like VGG, ResNet, DenseNet, and MobileNet have achieved high performance across various tasks, but designing them is time-consuming and costly. Neural Architecture Search (NAS) automates the discovery of effective CNN architectures, reducing the need for experts. However, evaluating candidate architectures requires significant GPU resources, leading to the use of predictor-based NAS, with graph convolutional networks (GCNs) a popular choice for constructing predictors. Yet we discover that, even though a GCN mimics the propagation of features through real architectures, the binary nature of the adjacency matrix limits its effectiveness. To address this, we propose Redirection of Adjacent Trails (RATs), which adaptively learns trail weights within the adjacency matrix. Our RATs-GCN outperforms other predictors by dynamically adjusting trail weights after each graph convolution layer. Additionally, the proposed Divide Search Sampling (DSS) strategy, based on the observation from cell-based NAS that architectures with similar FLOPs perform similarly, enhances search efficiency. Our RATs-NAS, which combines RATs-GCN and DSS, shows significant improvements over other predictor-based NAS methods on NASBench-101, NASBench-201, and NASBench-301.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
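To make the trail-redirection idea concrete, a GCN-style layer whose fixed binary adjacency is modulated by learnable edge weights might look as follows; the class name, shapes, and the elementwise parameterization are assumptions, not the paper's exact formulation:

import torch
import torch.nn as nn

class TrailWeightedGCNLayer(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim)
        # Learnable per-edge trail weights, initialized to plain adjacency.
        self.trail = nn.Parameter(torch.ones(n_nodes, n_nodes))

    def forward(self, A, X):
        # The elementwise product keeps zero entries of A at zero, so existing
        # trails are reweighted but no new connections are invented.
        return torch.relu((A * self.trail) @ self.W(X))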
Chen, X.; Yang, C.
CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture Journal Article
In: IEEE Micro, no. 01, pp. 1-12, 5555, ISSN: 1937-4143.
@article{10551739,
title = {CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture},
author = {X. Chen and C. Yang},
url = {https://www.computer.org/csdl/magazine/mi/5555/01/10551739/1XyKBmSlmPm},
doi = {10.1109/MM.2024.3409068},
issn = {1937-4143},
year = {5555},
date = {5555-06-01},
urldate = {5555-06-01},
journal = {IEEE Micro},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Computing-in-memory (CIM) architecture has been proven to effectively transcend the memory wall bottleneck, expanding the potential of low-power and high-throughput applications such as machine learning. Neural architecture search (NAS) designs ML models to meet a variety of accuracy, latency, and energy constraints. However, integrating CIM into NAS presents a major challenge due to additional simulation overhead from the non-ideal characteristics of CIM hardware. This work introduces a quantization and device aware accuracy predictor that jointly scores quantization policy, CIM architecture, and neural network architecture, eliminating the need for time-consuming simulations in the search process. We also propose reducing the search space based on architectural observations, resulting in a well-pruned search space customized for CIM. These allow for efficient exploration of superior combinations in mere CPU minutes. Our methodology yields CIMNet, which consistently improves the trade-off between accuracy and hardware efficiency on benchmarks, providing valuable architectural insights.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
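As a generic stand-in for the predictor the abstract describes, a quantization- and device-aware accuracy predictor can be as simple as an MLP over concatenated encodings of the three design axes; all feature sizes and layer widths below are invented for illustration:

import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    def __init__(self, quant_dim=8, cim_dim=16, arch_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(quant_dim + cim_dim + arch_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),   # scalar accuracy score
        )

    def forward(self, quant_enc, cim_enc, arch_enc):
        # Jointly scores (quantization policy, CIM architecture, network).
        return self.net(torch.cat([quant_enc, cim_enc, arch_enc], dim=-1))

Once such a predictor is trained on simulated ground truth, it replaces per-candidate hardware simulation during the search, which is what brings exploration cost down to CPU minutes.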
Lv, Hao; Zhang, Lei; Wang, Ying
In-situ NAS: A Plug-and-Search Neural Architecture Search framework across hardware platforms Journal Article
In: IEEE Transactions on Computers, no. 01, pp. 1-14, 5555, ISSN: 1557-9956.
@article{11003207,
title = { In-situ NAS: A Plug-and-Search Neural Architecture Search framework across hardware platforms },
author = {Hao Lv and Lei Zhang and Ying Wang},
url = {https://doi.ieeecomputersociety.org/10.1109/TC.2025.3569161},
doi = {10.1109/TC.2025.3569161},
issn = {1557-9956},
year = {5555},
date = {5555-05-01},
urldate = {5555-05-01},
journal = {IEEE Transactions on Computers},
number = {01},
pages = {1-14},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Hardware-aware Neural Architecture Search (HW-NAS) has garnered significant research interest due to its ability to automate the design of neural networks for various hardware platforms. Prevalent HW-NAS frameworks often use fast predictors to estimate network performance, bypassing the time-consuming actual profiling step. However, the resource-intensive nature of building these predictors and their accuracy limitations hinder their practical use in diverse deployment scenarios. In response, we emphasize the indispensable role of actual profiling in HW-NAS and explore efficiency optimization possibilities within the HW-NAS framework. We provide a systematic analysis of profiling overhead in HW-NAS and identify many redundant and unnecessary operations during the search phase. We then optimize the workflow and present In-situ NAS, which leverages similarity features and exploration history to eliminate redundancy and improve runtime efficiency. In-situ NAS also offers simplified interfaces to ease the user’s effort in managing the complex device-dependent profiling flow, enabling plug-and-search functionality across diverse hardware platforms. Experimental results show that In-situ NAS achieves an average 10x speedup across different hardware platforms while reducing the search overhead by 8x compared to predictor-based approaches in various deployment scenarios. Additionally, In-situ NAS consistently discovers networks with better accuracy (about 1.5%) across diverse hardware platforms compared to predictor-based NAS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
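A core ingredient above is eliminating redundant on-device measurements. A minimal sketch of that memoization idea follows; the similarity signature is an invented placeholder (the real system's similarity features and profiling flow are more involved):

_profile_cache = {}

def signature(arch):
    # Hypothetical similarity feature: a coarse multiset of operator counts,
    # so architectures built from the same ops can share one measurement.
    return tuple(sorted((op, arch.count(op)) for op in set(arch)))

def measured_latency(arch, device, profile_fn):
    # profile_fn is the caller-supplied on-device measurement routine;
    # arch is a list of operator names.
    key = (signature(arch), device)
    if key not in _profile_cache:      # profile only on a cache miss
        _profile_cache[key] = profile_fn(arch, device)
    return _profile_cache[key]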
Siddique, Ayesha; Hoque, Khaza Anuarul
Explainable AI-Guided Neural Architecture Search for Adversarial Robustness in Approximate DNNs Journal Article
In: IEEE Transactions on Sustainable Computing, no. 01, pp. 1-15, 5555, ISSN: 2377-3782.
@article{10966055,
title = { Explainable AI-Guided Neural Architecture Search for Adversarial Robustness in Approximate DNNs },
author = {Ayesha Siddique and Khaza Anuarul Hoque},
url = {https://doi.ieeecomputersociety.org/10.1109/TSUSC.2025.3561603},
doi = {10.1109/TSUSC.2025.3561603},
issn = {2377-3782},
year = {5555},
date = {5555-04-01},
urldate = {5555-04-01},
journal = {IEEE Transactions on Sustainable Computing},
number = {01},
pages = {1-15},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Deep neural networks are lucrative targets of adversarial attacks and approximate deep neural networks (AxDNNs) are no exception. Searching manually for adversarially robust AxDNN architectures incurs outrageous time and human effort. In this paper, we propose XAI-NAS, an explainable neural architecture search (NAS) method that leverages explainable artificial intelligence (XAI) to efficiently co-optimize the adversarial robustness and hardware efficiency of AxDNN architectures on systolic-array hardware accelerators. During the NAS process, AxDNN architectures are evolved layer-wise with heterogeneous approximate multipliers to deliver the best trade-offs between adversarial robustness, energy consumption, latency, and memory footprint. The most suitable approximate multipliers are automatically selected from the open-source Evoapprox8b library. Our extensive evaluations provide a set of Pareto-optimal, hardware-efficient, and adversarially robust solutions. For example, a Pareto-optimal AxDNN for the MNIST and CIFAR-10 datasets exhibits up to 1.5× higher adversarial robustness, 2.1× less energy consumption, 4.39× reduced latency, and a 2.37× lower memory footprint when compared to state-of-the-art NAS approaches.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dong, Yukang; Pan, Fanxing; Gui, Yi; Jiang, Wenbin; Wan, Yao; Zheng, Ran; Jin, Hai
Comprehensive Architecture Search for Deep Graph Neural Networks Journal Article
In: IEEE Transactions on Big Data, no. 01, pp. 1-15, 5555, ISSN: 2332-7790.
@article{10930718,
title = { Comprehensive Architecture Search for Deep Graph Neural Networks },
author = {Yukang Dong and Fanxing Pan and Yi Gui and Wenbin Jiang and Yao Wan and Ran Zheng and Hai Jin},
url = {https://doi.ieeecomputersociety.org/10.1109/TBDATA.2025.3552336},
doi = {10.1109/TBDATA.2025.3552336},
issn = {2332-7790},
year = {5555},
date = {5555-03-01},
urldate = {5555-03-01},
journal = {IEEE Transactions on Big Data},
number = {01},
pages = {1-15},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In recent years, Neural Architecture Search (NAS) has emerged as a promising approach for automatically discovering superior model architectures for deep Graph Neural Networks (GNNs). Different methods have paid attention to different types of search spaces. However, due to the time-consuming nature of training deep GNNs, existing NAS methods often fail to explore diverse search spaces sufficiently, which constrains their effectiveness. To crack this hard nut, we propose CAS-DGNN, a novel comprehensive architecture search method for deep GNNs. It encompasses four kinds of search spaces that are the composition of aggregate and update operators, different types of aggregate operators, residual connections, and hyper-parameters. To meet the needs of such a complex situation, a phased and hybrid search strategy is proposed to accommodate the diverse characteristics of different search spaces. Specifically, we divide the search process into four phases, utilizing evolutionary algorithms and Bayesian optimization. Meanwhile, we design two distinct search methods for residual connections (All-connected search and Initial Residual search) to streamline the search space, which enhances the scalability of CAS-DGNN. The experimental results show that CAS-DGNN achieves higher accuracy with competitive search costs across ten public datasets compared to existing methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yan, J.; Liu, J.; Xu, H.; Wang, Z.; Qiao, C.
Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-17, 5555, ISSN: 1558-0660.
@article{10460163,
title = {Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing},
author = {J. Yan and J. Liu and H. Xu and Z. Wang and C. Qiao},
doi = {10.1109/TMC.2024.3373506},
issn = {1558-0660},
year = {5555},
date = {5555-03-01},
urldate = {5555-03-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-17},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In edge computing (EC), federated learning (FL) enables numerous distributed devices (or workers) to collaboratively train AI models without exposing their local data. Most works of FL adopt a predefined architecture on all participating workers for model training. However, since workers' local data distributions vary heavily in EC, the predefined architecture may not be the optimal choice for every worker. It is also unrealistic to manually design a high-performance architecture for each worker, which requires intense human expertise and effort. In order to tackle this challenge, neural architecture search (NAS) has been applied in FL to automate the architecture design process. Unfortunately, existing federated NAS frameworks often suffer from the difficulties of system heterogeneity and resource limitation. To remedy this problem, we present a novel framework, termed Peaches, to achieve efficient searching and training in the resource-constrained EC system. Specifically, the local model of each worker is stacked from a base cell and a personal cell, where the base cell is shared by all workers to capture the common knowledge and the personal cell is customized for each worker to fit the local data. We determine the number of base cells, shared by all workers, according to the bandwidth budget on the parameter server. Besides, to relieve data and system heterogeneity, we find the optimal number of personal cells for each worker based on its computing capability. In addition, we gradually prune the search space during training to mitigate resource consumption. We evaluate the performance of Peaches through extensive experiments, and the results show that Peaches can achieve an average accuracy improvement of about 6.29% and up to 3.97× speedup compared with the baselines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Genchen; Liu, Zhengkun; Gan, Lin; Su, Hang; Li, Ting; Zhao, Wenfeng; Sun, Biao
SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-12, 5555, ISSN: 2691-4581.
@article{10855683,
title = { SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture },
author = {Genchen Sun and Zhengkun Liu and Lin Gan and Hang Su and Ting Li and Wenfeng Zhao and Biao Sun},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2025.3534136},
doi = {10.1109/TAI.2025.3534136},
issn = {2691-4581},
year = {5555},
date = {5555-01-01},
urldate = {5555-01-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In recent years, Neural Architecture Search (NAS) has marked significant advancements, yet its efficacy is marred by the dependence on substantial computational resources. To mitigate this, the development of NAS benchmarks has emerged, offering datasets that enumerate all potential network architectures and their performances within a predefined search space. Nonetheless, these benchmarks predominantly focus on convolutional architectures, which are criticized for their limited interpretability and suboptimal hardware efficiency. Recognizing the untapped potential of Spiking Neural Networks (SNNs) — often hailed as the third generation of neural networks for their biological realism and computational thrift — this study introduces SpikeNAS-Bench. As a pioneering benchmark for SNNs, SpikeNAS-Bench utilizes a cell-based search space, integrating leaky integrate-and-fire (LIF) neurons with variable thresholds as candidate operations. It encompasses 15,625 candidate architectures, rigorously evaluated on the CIFAR10, CIFAR100 and Tiny-ImageNet datasets. This paper delves into the architectural nuances of SpikeNAS-Bench, leveraging various criteria to underscore the benchmark’s utility and presenting insights that could steer future NAS algorithm designs. Moreover, we assess the benchmark’s consistency through three distinct proxy types: zero-cost-based, early-stop-based, and predictor-based proxies. Additionally, the paper benchmarks seven contemporary NAS algorithms to attest to SpikeNAS-Bench’s broad applicability. We commit to providing training logs and diagnostic data for all candidate architectures, and to releasing all code and datasets post-acceptance, aiming to catalyze further exploration and innovation within the SNN domain. SpikeNAS-Bench is open source at https://github.com/XXX (hidden for double anonymous review).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
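For readers unfamiliar with the candidate operations involved, a discrete-time leaky integrate-and-fire (LIF) update with a hard reset looks roughly like this; the decay and threshold values are placeholders (the benchmark searches over variable thresholds):

import numpy as np

def lif_step(v, x, decay=0.9, v_th=1.0):
    # One timestep: leak the membrane potential, integrate the input
    # current, and emit a spike wherever the threshold is crossed.
    v = decay * v + x
    spike = (v >= v_th).astype(v.dtype)
    v = v * (1.0 - spike)              # hard reset where a spike fired
    return v, spike

v = np.zeros(4)
for t in range(10):
    v, spikes = lif_step(v, np.full(4, 0.3))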
Li, Changlin; Lin, Sihao; Tang, Tao; Wang, Guangrun; Li, Mingjie; Li, Zhihui; Chang, Xiaojun
BossNAS Family: Block-wisely Self-supervised Neural Architecture Search Journal Article
In: IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 01, pp. 1-15, 5555, ISSN: 1939-3539.
@article{10839629,
title = { BossNAS Family: Block-wisely Self-supervised Neural Architecture Search },
author = {Changlin Li and Sihao Lin and Tao Tang and Guangrun Wang and Mingjie Li and Zhihui Li and Xiaojun Chang},
url = {https://doi.ieeecomputersociety.org/10.1109/TPAMI.2025.3529517},
doi = {10.1109/TPAMI.2025.3529517},
issn = {1939-3539},
year = {5555},
date = {5555-01-01},
urldate = {5555-01-01},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
number = {01},
pages = {1-15},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Recent advances in hand-crafted neural architectures for visual recognition underscore the pressing need to explore architecture designs comprising diverse building blocks. Concurrently, neural architecture search (NAS) methods have gained traction as a means to alleviate human effort. Nevertheless, whether NAS methods can efficiently and effectively manage diversified search spaces featuring disparate candidates, such as Convolutional Neural Networks (CNNs) and transformers, remains an open question. In this work, we introduce a novel unsupervised NAS approach called BossNAS (Block-wisely Self-supervised Neural Architecture Search), which aims to address the problem of inaccurate predictive architecture ranking caused by a large weight-sharing space while mitigating a potential ranking issue caused by biased supervision. To achieve this, we factorize the search space into blocks and introduce a novel self-supervised training scheme, called Ensemble Bootstrapping, to train each block separately in an unsupervised manner. In the search phase, we propose an unsupervised Population-Centric Search, optimizing the candidate architecture towards the population center. Additionally, we enhance our NAS method by integrating masked image modeling and present BossNAS++ to overcome the lack of dense supervision in our block-wise self-supervised NAS. In BossNAS++, we introduce a training technique named Masked Ensemble Bootstrapping for the block-wise supernet, accompanied by a Masked Population-Centric Search scheme to promote fairer architecture selection. Our family of models, discovered through BossNAS and BossNAS++, delivers impressive results across various search spaces and datasets. Our transformer model discovered by BossNAS++ attains a remarkable accuracy of 83.2% on ImageNet with only 10.5B MAdds, surpassing DeiT-B by 1.4% while maintaining a lower computation cost. Moreover, our approach excels in architecture rating accuracy, achieving Spearman correlations of 0.78 and 0.76 on the canonical MBConv search space with ImageNet and the NATS-Bench size search space with CIFAR-100, respectively, outperforming state-of-the-art NAS methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2026
Xie, Yu; Chang, Yu; Li, Ming; Qin, A. K.; Zhang, Xialei
AutoSGRL: Automated framework construction for self-supervised graph representation learning Journal Article
In: Neural Networks, vol. 194, pp. 108119, 2026, ISSN: 0893-6080.
@article{XIE2026108119,
title = {AutoSGRL: Automated framework construction for self-supervised graph representation learning},
author = {Yu Xie and Yu Chang and Ming Li and A. K. Qin and Xialei Zhang},
url = {https://www.sciencedirect.com/science/article/pii/S0893608025009992},
doi = {https://doi.org/10.1016/j.neunet.2025.108119},
issn = {0893-6080},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Neural Networks},
volume = {194},
pages = {108119},
abstract = {Automated machine learning (AutoML) is a promising solution for building a machine learning framework without human assistance and has attracted significant attention throughout the computational intelligence research community. Although there has been emerging interest in graph neural architecture search, current research focuses on the specific design of semi-supervised or supervised graph neural networks. Motivated by this, we propose a novel method that enables the automatic construction of flexible self-supervised graph representation learning frameworks, for the first time as far as we know, referred to as AutoSGRL. Based on existing self-supervised graph contrastive learning methods, AutoSGRL establishes a framework search space for self-supervised graph representation learning, which encompasses data augmentation strategies and proxy tasks for constructing graph contrastive learning frameworks, and the hyperparameters required for model training. Then, we implement an automatic search engine based on genetic algorithms, which constructs multiple self-supervised graph representation learning frameworks as the initial population. By simulating the process of biological evolution, including selection, crossover, and mutation, the search engine iteratively evolves the population to identify high-performing frameworks and optimal hyperparameters. Empirical studies demonstrate that our AutoSGRL achieves comparable or even better performance than state-of-the-art manually designed self-supervised graph representation learning methods and semi-supervised graph neural architecture search methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xie, Weisheng; Gao, Xiangxiang; Fang, Xuwei; Li, Hui; Hang, Chen; Li, Shaoyuan
EQUINAS: Equilibrium-guided differentiable neural architecture search Journal Article
In: Expert Systems with Applications, vol. 298, pp. 129711, 2026, ISSN: 0957-4174.
@article{XIE2026129711,
title = {EQUINAS: Equilibrium-guided differentiable neural architecture search},
author = {Weisheng Xie and Xiangxiang Gao and Xuwei Fang and Hui Li and Chen Hang and Shaoyuan Li},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425033263},
doi = {https://doi.org/10.1016/j.eswa.2025.129711},
issn = {0957-4174},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Expert Systems with Applications},
volume = {298},
pages = {129711},
abstract = {Recent research has significantly mitigated the performance collapse issue in Differentiable Architecture Search (DARTS) by either refining architecture parameters to better reflect the true strengths of operations or developing alternative metrics for evaluating operation significance. However, the actual role and impact of architecture parameters remain insufficiently explored, creating critical ambiguities in the search process. To address this gap, we conduct a rigorous theoretical analysis demonstrating that the change rate of architecture parameters reflects the sensitivity of the supernet’s validation loss in architecture space, thereby influencing the derived architecture’s performance by shaping supernet training dynamics. Building on these insights, we introduce the concept of a Stable Equilibrium State to capture the stability of the bi-level optimization process and propose the Equilibrium Influential (EI) metric to assess operation importance. By integrating these elements, we propose EQUINAS, a differentiable NAS approach that leverages the Stable Equilibrium State to identify the optimal state during the search process and derives the final architecture using the EI metric. Extensive experiments across diverse datasets and search spaces demonstrate that EQUINAS achieves competitive test accuracy compared to state-of-the-art methods while significantly reducing search costs. Additionally, EQUINAS shows remarkable performance in Transformer-based architectures and excels in real-world applications such as image classification and text recognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, Weiduo; Dong, Xiaoshe; Wang, Qiang
DP-SWAP: Fast Swapping Strategy Based on Dynamic Programming Journal Article
In: Future Generation Computer Systems, vol. 175, pp. 108071, 2026, ISSN: 0167-739X.
@article{CHEN2026108071,
title = {DP-SWAP: Fast Swapping Strategy Based on Dynamic Programming},
author = {Weiduo Chen and Xiaoshe Dong and Qiang Wang},
url = {https://www.sciencedirect.com/science/article/pii/S0167739X25003656},
doi = {https://doi.org/10.1016/j.future.2025.108071},
issn = {0167-739X},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Future Generation Computer Systems},
volume = {175},
pages = {108071},
abstract = {Neural Architecture Search (NAS) has emerged as an effective approach for automating neural network design. However, NAS imposes significant GPU memory pressure due to the need to evaluate numerous candidate models during training. While tensor swapping helps reduce memory usage, existing tensor selection methods rely on extensive iterative searches, which require repeatedly traversing model computation graphs to evaluate the impact of swapping schemes, leading to high time complexity and poor scalability in dynamic NAS scenarios. To address this issue, we propose DP-SWAP, a novel tensor swapping strategy based on dynamic programming. By leveraging the optimal substructure property of the tensor selection problem, DP-SWAP computes effective swapping schemes with only O(n) time complexity, allowing fast and adaptive decision-making during NAS model exploration. Experimental results show that DP-SWAP achieves training performance comparable to state-of-the-art heuristic methods, while reducing swapping decision time by over 3 orders of magnitude, thus effectively alleviating GPU memory bottlenecks in NAS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
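The paper's O(n) recurrence operates on the model computation graph; purely to illustrate how tensor selection can be cast as dynamic programming with optimal substructure, here is a generic knapsack-style sketch that frees a target amount of memory at minimum estimated stall cost. Sizes and costs are invented, and this is not the DP-SWAP algorithm itself:

def select_swaps(tensors, need):
    # tensors: list of (size_mb, stall_cost); need: memory to free, in MB.
    INF = float("inf")
    best = {0: (0.0, [])}                  # freed MB -> (cost, chosen indices)
    for i, (size, cost) in enumerate(tensors):
        for freed, (c, ids) in list(best.items()):
            f2 = min(freed + size, need)   # cap freed memory at the target
            if c + cost < best.get(f2, (INF, None))[0]:
                best[f2] = (c + cost, ids + [i])
    return best.get(need, (INF, []))[1]

# Swapping the two 256 MB tensors (total cost 0.5) beats the 512 MB one (1.0).
plan = select_swaps([(512, 1.0), (256, 0.2), (256, 0.3)], need=512)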
Li, Jian; Zhu, Yunlong; Dong, Zhicheng; Yang, Hucheng; Cheng, Xudong; Xue, Zhenyu
A lightweight arc fault detection model integrating multi-objective architecture search with dynamic noise-augmented training Journal Article
In: Measurement, vol. 257, pp. 118649, 2026, ISSN: 0263-2241.
@article{LI2026118649,
title = {A lightweight arc fault detection model integrating multi-objective architecture search with dynamic noise-augmented training},
author = {Jian Li and Yunlong Zhu and Zhicheng Dong and Hucheng Yang and Xudong Cheng and Zhenyu Xue},
url = {https://www.sciencedirect.com/science/article/pii/S0263224125020081},
doi = {https://doi.org/10.1016/j.measurement.2025.118649},
issn = {0263-2241},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Measurement},
volume = {257},
pages = {118649},
abstract = {Ensuring accurate and efficient arc fault detection is critical for the safety and reliability of modern electrical systems, particularly in embedded and resource-constrained environments. This paper presents a lightweight convolutional neural network (CNN) model optimized through a multi-objective genetic algorithm (NSGA-II) to achieve a balance between detection accuracy, computational complexity, and noise robustness. The proposed model integrates Squeeze-and-Excitation (SE) attention mechanisms, depthwise separable convolutions, and dynamic Gaussian noise augmentation during training to enhance generalization under noisy conditions. Neural architecture search is employed to automatically design compact yet high-performing architectures, with the final model achieving an F1-score of 99.49 % using only 2529 parameters. The model is validated experimentally on a Raspberry Pi 4B platform, demonstrating an average inference time of 0.785 ms per sample, thereby confirming its real-time detection capability. This study offers a robust, efficient, and practical solution for arc fault diagnosis in embedded industrial applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yang, Yeming; Liu, Zhihao; Wong, Ka-Chun; Lin, Qiuzhen; Luo, Jianping; Li, Jianqiang
Evolutionary multi-task robust architecture search for network intrusion detection Journal Article
In: Expert Systems with Applications, vol. 296, pp. 128899, 2026, ISSN: 0957-4174.
@article{YANG2026128899,
title = {Evolutionary multi-task robust architecture search for network intrusion detection},
author = {Yeming Yang and Zhihao Liu and Ka-Chun Wong and Qiuzhen Lin and Jianping Luo and Jianqiang Li},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425025163},
doi = {https://doi.org/10.1016/j.eswa.2025.128899},
issn = {0957-4174},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Expert Systems with Applications},
volume = {296},
pages = {128899},
abstract = {Network Intrusion Detection (NID) has become a key technology for ensuring network security. Researchers have recently proposed various NID systems based on neural networks. However, these networks require expensive expert knowledge for manual design, which is tedious and time-consuming. Moreover, they easily suffer from adversarial attacks, which limits their application in safety-critical scenarios. To alleviate the above problems, this paper proposes an evolutionary multi-task robust architecture search method, called EMR-NID, which can automatically design robust architectures for NID systems. First, we design an architecture transfer update strategy that achieves information sharing and knowledge transfer between different tasks. Then, we develop an architecture performance correction strategy that enhances the efficiency of robust search and strengthens NID’s defense capability. Finally, our EMR-NID method is validated on three well-known NID datasets, i.e., NSL-KDD, UNSW-NB15, and Edge-IIoTset. The experimental results show that EMR-NID can outperform some state-of-the-art NID methods in terms of clean and robust accuracy under multiple scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Poyser, Matt; Breckon, Toby P.
DDS-NAS: Dynamic data selection within neural architecture search via on-line hard example mining applied to image classification Journal Article
In: Pattern Recognition, vol. 169, pp. 111948, 2026, ISSN: 0031-3203.
@article{POYSER2026111948,
title = {DDS-NAS: Dynamic data selection within neural architecture search via on-line hard example mining applied to image classification},
author = {Matt Poyser and Toby P. Breckon},
url = {https://www.sciencedirect.com/science/article/pii/S0031320325006089},
doi = {https://doi.org/10.1016/j.patcog.2025.111948},
issn = {0031-3203},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Pattern Recognition},
volume = {169},
pages = {111948},
abstract = {In order to address the scalability challenge within Neural Architecture Search (NAS), we speed up NAS training via dynamic hard example mining within a curriculum learning framework. By utilising an autoencoder that enforces an image similarity embedding in latent space, we construct an efficient kd-tree structure to order images by furthest neighbour dissimilarity in a low-dimensional embedding. From a given query image from our subsample dataset, we can identify the most dissimilar image within the global dataset in logarithmic time. Via curriculum learning, we then dynamically re-formulate an unbiased subsample dataset for NAS optimisation, upon which the current NAS solution architecture performs poorly. We show that our DDS-NAS framework speeds up gradient-based NAS strategies by up to 27× without loss in performance. By maximising the contribution of each image sample during training, we reduce the duration of a NAS training cycle and the number of iterations required for convergence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
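The key primitive above is looking up, for a query image, the most dissimilar image in latent space. A brute-force stand-in is a one-liner; the paper's contribution is achieving the same lookup in logarithmic time via a kd-tree over the autoencoder embedding. The random codes below are placeholders:

import numpy as np

def furthest_neighbour(query, embeddings):
    # embeddings: (N, d) latent codes; query: (d,) latent code.
    return int(np.argmax(np.linalg.norm(embeddings - query, axis=1)))

codes = np.random.rand(1000, 16)   # stand-in autoencoder embeddings
hardest = furthest_neighbour(codes[0], codes)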
2025
Wang, Weiqi; Bao, Feilong; Xing, Zhecong; Lian, Zhe
A Survey: Research Progress of Feature Fusion Technology Journal Article
In: 2025.
@article{wangsurvey,
title = {A Survey: Research Progress of Feature Fusion Technology},
author = {Weiqi Wang and Feilong Bao and Zhecong Xing and Zhe Lian},
url = {http://poster-openaccess.com/files/ICIC2024/862.pdf},
year = {2025},
date = {2025-12-01},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Vacheva, Gergana; Stanchev, Plamen; Hinov, Nikolay
Machine-Generated Neural Networks for Short-Term Load Forecasting Collection
2025.
@collection{nokey,
title = {Machine-Generated Neural Networks for Short-Term Load Forecasting},
author = {Gergana Vacheva and Plamen Stanchev and Nikolay Hinov},
url = {https://unitechsp.tugab.bg/images/2024/1-EE/s1_p143_v1.pdf},
year = {2025},
date = {2025-12-01},
urldate = {2025-12-01},
booktitle = {International Scientific Conference UNITECH`2024},
journal = {International Scientific Conference UNITECH`2024},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Taha, Mohamed A.; Saafan, Mahmoud M.; Ayyad, Sarah M.
Revisiting natural selection: evolving dynamic neural networks using genetic algorithms for complex control tasks Journal Article
In: Artificial Intelligence Review, 2025.
@article{Taha-air25a,
title = {Revisiting natural selection: evolving dynamic neural networks using genetic algorithms for complex control tasks},
author = {Mohamed A. Taha and Mahmoud M. Saafan and Sarah M. Ayyad},
url = {https://link.springer.com/article/10.1007/s10462-025-11382-9},
year = {2025},
date = {2025-09-16},
urldate = {2025-09-16},
journal = {Artificial Intelligence Review},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lin, Shuaifei; Zhang, Wei; Xu, Nannan; Liu, Xueli; Wu, Jianfeng
Automatic design of CNN architecture based on genetic algorithm and particle swarm optimization Journal Article
In: Evolving Systems, 2025.
@article{lin-es25a,
title = {Automatic design of CNN architecture based on genetic algorithm and particle swarm optimization},
author = {Shuaifei Lin and Wei Zhang and Nannan Xu and Xueli Liu and Jianfeng Wu},
url = {https://link.springer.com/article/10.1007/s12530-025-09738-1},
year = {2025},
date = {2025-09-09},
journal = {Evolving Systems},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Lianhua; Xu, Meilin; Chen, Jiawen; Li, Yucheng; Shen, Zhou
Joint Structure-Function Neural Architecture Optimization under Resource Constraints Technical Report
2025.
@techreport{nokey,
title = {Joint Structure-Function Neural Architecture Optimization under Resource Constraints},
author = {Lianhua Wang and Meilin Xu and Jiawen Chen and Yucheng Li and Zhou Shen},
url = {https://www.researchsquare.com/article/rs-7460435/v1},
year = {2025},
date = {2025-09-02},
urldate = {2025-09-02},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lüder, Leonardo Alessandro; Ivanov, Mikhail; Franco, Ramon Fuentes
Learning optimal CNN architectures for Precipitation Downscaling with Differentiable Architecture Search Journal Article
In: Authorea, 2025.
@article{nokey,
title = {Learning optimal CNN architectures for Precipitation Downscaling with Differentiable Architecture Search},
author = { Leonardo Alessandro Lüder and Mikhail Ivanov and Ramon Fuentes Franco},
url = {https://www.authorea.com/users/956412/articles/1325379-learning-optimal-cnn-architectures-for-precipitation-downscaling-with-differentiable-architecture-search},
doi = {10.22541/au.175580225.56700263/v1},
year = {2025},
date = {2025-09-01},
journal = {Authorea},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chen, Jiale; Le, Duc Van; Li, Yuanchun; Liu, Yunxin; Tan, Rui
TimelyNet: Adaptive Neural Architecture for Autonomous Driving with Dynamic Deadline Journal Article
In: ACM Trans. Embed. Comput. Syst., vol. 24, no. 5s, 2025, ISSN: 1539-9087.
@article{10.1145/3762652,
title = {TimelyNet: Adaptive Neural Architecture for Autonomous Driving with Dynamic Deadline},
author = {Jiale Chen and Duc Van Le and Yuanchun Li and Yunxin Liu and Rui Tan},
url = {https://doi.org/10.1145/3762652},
doi = {10.1145/3762652},
issn = {1539-9087},
year = {2025},
date = {2025-09-01},
urldate = {2025-09-01},
journal = {ACM Trans. Embed. Comput. Syst.},
volume = {24},
number = {5s},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {To maintain driving safety, the execution of neural network-based autonomous driving pipelines must meet the dynamic deadlines in response to the changing environment and vehicle’s velocity. To this end, this article proposes a real-time neural architecture adaptation approach, called TimelyNet, which uses a supernet to replace the most compute-intensive neural network module in an existing end-to-end autonomous driving pipeline. From the supernet, TimelyNet samples subnets with varying inference latency levels to meet the dynamic deadlines during run-time driving without fine-tuning. Specifically, TimelyNet employs a one-shot prediction method that jointly uses a lookup table and an invertible neural network to periodically determine the optimal hyperparameters of a subnet to meet its execution deadline while achieving the highest possible accuracy. The lookup table stores multiple subnet architectures with different latencies, while the invertible neural network models the distribution of the optimal subnet architecture given the latency. Extensive evaluation based on hardware-in-the-loop CARLA simulations shows that TimelyNet-integrated driving pipelines achieve the best driving safety, characterized by the lowest wrong-lane driving rate and zero collisions, compared with several baselines, including the state-of-the-art driving pipelines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
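The lookup-table half of the one-shot prediction step reduces to a simple feasibility query: keep the most accurate stored subnet whose measured latency fits the current deadline. A sketch with invented entries (latencies, accuracies, and IDs are placeholders, not profiled numbers):

def pick_subnet(table, deadline_ms):
    # table: list of (latency_ms, accuracy, subnet_id), unsorted.
    feasible = [row for row in table if row[0] <= deadline_ms]
    # Fall back to the fastest subnet if nothing meets the deadline.
    return max(feasible, key=lambda r: r[1]) if feasible else min(table)

table = [(12.0, 0.71, "S"), (21.0, 0.76, "M"), (35.0, 0.80, "L")]
chosen = pick_subnet(table, deadline_ms=25.0)   # -> (21.0, 0.76, "M")

The invertible-network half of the method, which generalizes beyond the stored entries, has no such two-line analogue.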
Wang, Hongjiang; Zhang, Tian; Liu, Jinsheng; Ren, Na; Dai, Qin
Large language model-based neural architecture search for efficient hydro-turbine fault detection Journal Article
In: Discover Computing, 2025.
@article{wang-dc25a,
title = {Large language model-based neural architecture search for efficient hydro-turbine fault detection},
author = {Hongjiang Wang and Tian Zhang and Jinsheng Liu and Na Ren and Qin Dai},
url = {https://link.springer.com/article/10.1007/s10791-025-09711-1},
year = {2025},
date = {2025-08-31},
urldate = {2025-08-31},
journal = {Discover Computing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Shanchuan; Tsukayama, Daisuke; Shirakashi, Jun-ichi; Shibuya, Tetsuo; Imai, Hiroshi
Quantum architecture search with neural predictor based on ZX-calculus Journal Article
In: EPJ Quantum Technology, 2025.
@article{nokey,
title = {Quantum architecture search with neural predictor based on ZX-calculus},
author = {Shanchuan Li and Daisuke Tsukayama and Jun-ichi Shirakashi and Tetsuo Shibuya and Hiroshi Imai},
url = {https://epjquantumtechnology.springeropen.com/articles/10.1140/epjqt/s40507-025-00410-w},
year = {2025},
date = {2025-08-31},
urldate = {2025-08-31},
journal = {EPJ Quantum Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gehlot, Naveen; Kumar, Rajesh; Hans, Surender; Nkomozepi, Pilani
1D Convolutional Neural Architecture Search for sEMG Hand Gesture Recognition Journal Article
In: SN Computer Science, 2025.
@article{Gehlot-sncs25a,
title = {1D Convolutional Neural Architecture Search for sEMG Hand Gesture Recognition},
author = {Naveen Gehlot and Rajesh Kumar and Surender Hans and Pilani Nkomozepi},
url = {https://link.springer.com/article/10.1007/s42979-025-04324-3},
year = {2025},
date = {2025-08-31},
urldate = {2025-08-31},
journal = {SN Computer Science},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Trirat, Patara; Lee, Jae-Gil
TFAS: zero-shot NAS for general time-series analysis with time-frequency aware scoring Collection
2025.
@collection{nokey,
title = {TFAS: zero-shot NAS for general time-series analysis with time-frequency aware scoring},
author = {Patara Trirat and Jae-Gil Lee},
url = {https://link.springer.com/article/10.1007/s10994-025-06832-y},
year = {2025},
date = {2025-08-29},
urldate = {2025-08-29},
booktitle = {ECML PKDD 2025},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Bisen, Dhananjay; Saurabh, Praneet; Thakur, Mayank; Chaubey, Gyanendra; Singh, Upendra; Dubey, Aditya
Genetic Algorithm-Based Search Space Exploration to Generate Best Convolutional Neural Network Journal Article
In: IEEE Access, 2025.
@article{nokey,
title = {Genetic Algorithm-Based Search Space Exploration to Generate Best Convolutional Neural Network},
author = {Dhananjay Bisen and Praneet Saurabh and Mayank Thakur and Gyanendra Chaubey and Upendra Singh and Aditya Dubey},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=11151578},
year = {2025},
date = {2025-08-20},
urldate = {2025-08-20},
journal = {IEEE Access},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Al-Saadi, Muna; Al-Saadi, Bushra; Farhan, Dheyauldeen Ahmed; Hassen, Oday Ali
Optimizing Neural Network Architectures with TensorFlow and Keras for Scalable Deep Learning Journal Article
In: Journal of Intelligent Systems and Internet of Things, 2025.
@article{nokey,
title = {Optimizing Neural Network Architectures with TensorFlow and Keras for Scalable Deep Learning},
author = {Muna Al-Saadi and Bushra Al-Saadi and Dheyauldeen Ahmed Farhan and Oday Ali Hassen},
url = {https://doi.org/10.54216/JISIoT.180108},
year = {2025},
date = {2025-08-15},
urldate = {2025-08-15},
journal = {Journal of Intelligent Systems and Internet of Things},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lyu, Zimeng
Online and Offline Multi-Variate Time Series Forecasting with NeuroEvolution Based Neural Architecture Search PhD Thesis
2025.
@phdthesis{nokey,
title = {Online and Offline Multi-Variate Time Series Forecasting with NeuroEvolution Based Neural Architecture Search},
author = {Zimeng Lyu},
url = {https://www.proquest.com/docview/3238530156?pq-origsite=gscholar&fromopenview=true&sourcetype=Dissertations%20&%20Theses},
year = {2025},
date = {2025-08-15},
urldate = {2025-08-15},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Gao, Xiang
Toward Real-Time and Efficient Edge Intelligence: Advances and Challenges in Lightweight Machine Learning Journal Article
In: Science and Technology of Engineering, Chemistry and Environmental Protection, vol. 1, no. 4, 2025.
@article{nokey,
title = { Toward Real-Time and Efficient Edge Intelligence: Advances and Challenges in Lightweight Machine Learning },
author = {Xiang Gao},
url = {https://lseee.net/index.php/te/article/view/822},
year = {2025},
date = {2025-08-02},
urldate = {2025-08-02},
journal = {Science and Technology of Engineering, Chemistry and Environmental Protection},
volume = {1},
number = {4},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Müller, Erik Vanegas; Joe-Oshodi, Arese; He, Liang; Banerjee, Abhirup; Villarroel, Mauricio
Deep Learning Optimisation for Sports Cardiology: Neural Architecture Search-driven Arrhythmia Classification Technical Report
2025.
@techreport{nokey,
title = {Deep Learning Optimisation for Sports Cardiology: Neural Architecture Search-driven Arrhythmia Classification},
author = {Erik Vanegas Müller and Arese Joe-Oshodi and Liang He and Abhirup Banerjee and Mauricio Villarroel},
url = {https://cinc.org/2025/Program/accepted/140_Preprint.pdf},
year = {2025},
date = {2025-08-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Rossi, Michele; Iacca, Giovanni; Turchet, Luca
Automatic Classification of Chains of Guitar Effects Through Evolutionary Neural Architecture Search Collection
2025.
@collection{nokey,
title = {Automatic Classification of Chains of Guitar Effects Through Evolutionary Neural Architecture Search},
author = {Michele Rossi and Giovanni Iacca and Luca Turchet},
url = {https://dafx.de/paper-archive/2025/DAFx25_paper_16.pdf},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
booktitle = {Proceedings of the 28th International Conference on Digital Audio Effects (DAFx25) },
journal = {Proceedings of the 28th International Conference on Digital Audio Effects (DAFx25)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Ma, Quangong; Hao, Chaolong; Si, NianWen; Qu, Dan
Quantum architecture search for optimizing quantum generators in quantum GAN Journal Article
In: Mach. Learn.: Sci. Technol., 2025.
@article{ma-mlst25a,
title = {Quantum architecture search for optimizing quantum generators in quantum GAN},
author = {Quangong Ma and Chaolong Hao and NianWen Si and Dan Qu},
url = {https://iopscience.iop.org/article/10.1088/2632-2153/ae056d},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
journal = {Mach. Learn.: Sci. Technol.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
He, Zhimin; Li, Zhengjiang; Situ, Haozhen; Li, Qin; Shi, Jinjing; Li, Lvzhou
Adaptive fusion of training-free proxies for quantum architecture search Journal Article
In: Phys. Rev. Appl., vol. 24, iss. 2, pp. 024074, 2025.
@article{rbhx-3fjd,
title = {Adaptive fusion of training-free proxies for quantum architecture search},
author = {Zhimin He and Zhengjiang Li and Haozhen Situ and Qin Li and Jinjing Shi and Lvzhou Li},
url = {https://link.aps.org/doi/10.1103/rbhx-3fjd},
doi = {10.1103/rbhx-3fjd},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
journal = {Phys. Rev. Appl.},
volume = {24},
issue = {2},
pages = {024074},
publisher = {American Physical Society},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rodriguez, Jose Maria Lancho
Experiential neural architecture selection: dynamic cross-layer memory for real-time inference optimization Technical Report
2025.
@techreport{nokey,
title = {Experiential neural architecture selection: dynamic cross-layer memory for real-time inference optimization},
author = {Jose Maria Lancho Rodriguez},
url = {https://www.researchsquare.com/article/rs-7378044/v1},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Silva, Ricardo Martins Abreu; Silva, Andersson Alves
Heuristics in Design of Deep Neural Networks Book Chapter
In: Martí, Rafael; Pardalos, Panos M.; Resende, Mauricio G. C. (Ed.): Handbook of Heuristics, pp. 1–55, Springer Nature Switzerland, Cham, 2025, ISBN: 978-3-319-07153-4.
@inbook{deAbreuSilva2016,
title = {Heuristics in Design of Deep Neural Networks},
author = {Ricardo Martins Abreu Silva and Andersson Alves Silva},
editor = {Rafael Martí and Panos M. Pardalos and Mauricio G. C. Resende},
url = {https://doi.org/10.1007/978-3-319-07153-4_74-1},
doi = {10.1007/978-3-319-07153-4_74-1},
isbn = {978-3-319-07153-4},
year = {2025},
date = {2025-08-01},
urldate = {2021-01-17},
booktitle = {Handbook of Heuristics},
pages = {1–55},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {The complexity of Deep Neural Networks (DNNs) has driven advancements in Neural Architecture Search (NAS), Hyperparameter Optimization (HPO), and Learning Rule Optimization (LRO). This study reviews heuristic methodologies, focusing on Evolutionary Algorithms (EAs) and Swarm Intelligence (SI). We analyze Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Multi-Objective Optimization (MOO), emphasizing the Biased Random Key Genetic Algorithm (BRKGA). BRKGA encodes neural architectures and hyperparameters as continuous vectors, enhancing search efficiency in NAS and HPO. We evaluate BRKGA on Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs), demonstrating its effectiveness in tuning learning rates, dropout rates, and batch sizes. Additionally, we explore its role in LRO, optimizing adaptive weight updates and gradient modulation. Experiments on benchmark datasets show that BRKGA consistently yields promising architectures and hyperparameter configurations, balancing accuracy, efficiency, and adaptability. Our findings highlight BRKGA as a viable alternative for NAS, HPO, and LRO, particularly in complex search spaces where structured exploration is essential. Finally, challenges in heuristic-driven NAS, HPO, and AutoML are examined, along with future research directions in scalable optimization, adaptive learning mechanisms, and neuromorphic computing.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
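The chapter's central encoding is easy to illustrate: BRKGA represents a candidate as a vector of random keys in [0, 1], and a decoder maps each key proportionally onto an option list. The search space below is an invented toy, not the chapter's benchmarks:

import random

SPACE = {
    "n_layers":   [2, 3, 4, 5],
    "width":      [64, 128, 256],
    "dropout":    [0.0, 0.1, 0.3],
    "batch_size": [32, 64, 128],
}

def decode(keys):
    # Each key in [0, 1) indexes proportionally into its option list.
    return {
        name: opts[min(int(k * len(opts)), len(opts) - 1)]
        for k, (name, opts) in zip(keys, SPACE.items())
    }

chromosome = [random.random() for _ in SPACE]
config = decode(chromosome)   # e.g. {'n_layers': 3, 'width': 256, ...}

Because crossover and mutation act on the continuous keys rather than on the decoded architectures, standard BRKGA machinery applies unchanged, which is what makes the encoding attractive for NAS and HPO.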
Huang, Hongtao
Efficient Deep Learning: Model Design and Algorithmic Innovation PhD Thesis
2025.
@phdthesis{nokey,
title = {Efficient Deep Learning: Model Design and Algorithmic Innovation},
author = {Hongtao Huang},
url = {https://unsworks.unsw.edu.au/entities/publication/07383e96-3c1d-4018-a662-9d659a0bbabb},
year = {2025},
date = {2025-08-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Huang, Junhao
Improving Evolutionary Neural Architecture Search: Flexibility, Compactness, and Efficiency PhD Thesis
2025.
@phdthesis{nokey,
title = {Improving Evolutionary Neural Architecture Search: Flexibility, Compactness, and Efficiency},
author = {Junhao Huang},
url = {https://doi.org/10.26686/wgtn.29905886},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Whitmore, James; Hastings, Clara; Patel, Amir; Brody, Stephany
Efficient Inference of Large Language Models through Model Compression Journal Article
In: Preprints, 2025.
@article{202508.0192,
title = {Efficient Inference of Large Language Models through Model Compression},
author = {James Whitmore and Clara Hastings and Amir Patel and Stephany Brody},
url = {https://doi.org/10.20944/preprints202508.0192.v1},
doi = {10.20944/preprints202508.0192.v1},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
journal = {Preprints},
publisher = {Preprints},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Trirat, Patara
Neural architecture search for time-series analysis with context-specific performance estimators PhD Thesis
2025.
@phdthesis{nokey,
title = {Neural architecture search for time-series analysis with context-specific performance estimators},
author = {Patara Trirat},
url = {https://koasas.kaist.ac.kr/handle/10203/331476#},
year = {2025},
date = {2025-07-31},
urldate = {2025-07-31},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Sheehan, Matthew; Yakimenko, Oleg
Neural architecture search applying optimal stopping theory Journal Article
In: Frontiers in Artificial Intelligence, Sec. Machine Learning and Artificial Intelligence, 2025.
@article{Sheehan-mlai25a,
title = {Neural architecture search applying optimal stopping theory},
author = {Matthew Sheehan and Oleg Yakimenko },
url = {https://doi.org/10.3389/frai.2025.1643088},
year = {2025},
date = {2025-07-31},
urldate = {2025-07-31},
journal = {Frontiers in Artificial Intelligence, Sec. Machine Learning and Artificial Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ajmal, Muhammad Hassan; Qureshi, Musadiq Ahmed; Waleed, Muhammad; Ibrar, Muhammad; Iqbal, Muhammad Waseem; Muhammad, Hafiz Abdul Basit
Accelerating Local AI Chatbot Processing Speed: Model Evaluation and Advanced Algorithmic Enhancements Using Machine Learning Journal Article
In: Annual Methodological Archive Research Review, 2025.
@article{nokey,
title = {Accelerating Local AI Chatbot Processing Speed: Model Evaluation and Advanced Algorithmic Enhancements Using Machine Learning},
author = {Muhammad Hassan Ajmal and Musadiq Ahmed Qureshi and Muhammad Waleed and Muhammad Ibrar and Muhammad Waseem Iqbal and Hafiz Abdul Basit Muhammad},
url = {https://www.amresearchreview.com/index.php/Journal/article/view/471},
year = {2025},
date = {2025-07-31},
urldate = {2025-07-31},
journal = {Annual Methodological Archive Research Review},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Islam, Tanvir
Mycelium neural architecture search Journal Article
In: Evolutionary Intelligence, 2025.
@article{Myceliumneuralarchitecturesearch,
title = {Mycelium neural architecture search},
author = {Tanvir Islam },
url = {https://link.springer.com/article/10.1007/s12065-025-01077-z},
year = {2025},
date = {2025-07-31},
urldate = {2025-07-31},
journal = {Evolutionary Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Alshahrani, Rami Ayied; Khanzada, Tariq Jamil Saifullah
Improved Crime Prediction Using Hybrid Neural Architecture Search Together with Hyperparameter Tuning Journal Article
In: International Journal of Computational Intelligence Systems, 2025.
@article{Alshahrani-ijcis25a,
title = {Improved Crime Prediction Using Hybrid Neural Architecture Search Together with Hyperparameter Tuning},
author = {Rami Ayied Alshahrani and Tariq Jamil Saifullah Khanzada},
url = {https://link.springer.com/article/10.1007/s44196-025-00888-3},
year = {2025},
date = {2025-07-28},
urldate = {2025-07-28},
journal = {International Journal of Computational Intelligence Systems},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hakiki, Racha Ikram; Azerine, Abdennour; Tlemsani, Redouane; Golabi, Mahmoud; Idoumghar, Lhassane
Enhancing IoT intrusion detection with genetic algorithm-optimized convolutional neural networks Journal Article
In: The Journal of Supercomputing, 2025.
@article{Hakiki-jsc25a,
title = {Enhancing IoT intrusion detection with genetic algorithm-optimized convolutional neural networks},
author = {Racha Ikram Hakiki and Abdennour Azerine and Redouane Tlemsani and Mahmoud Golabi and Lhassane Idoumghar},
url = {https://link.springer.com/article/10.1007/s11227-025-07626-8},
year = {2025},
date = {2025-07-25},
urldate = {2025-07-25},
journal = {The Journal of Supercomputing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Yuhao; Xiao, Hanmin; Du, Meng; Liu, Qingjie; Tao, Jingwei; Luo, Yongcheng; Peng, Li; Tan, Jianbo
Application of neural architecture search in lithology identification Journal Article
In: Journal of Petroleum Exploration and Production Technology, 2025.
@article{nokey,
title = {Application of neural architecture search in lithology identification},
author = {Yuhao Zhang and Hanmin Xiao and Meng Du and Qingjie Liu and Jingwei Tao and Yongcheng Luo and Li Peng and Jianbo Tan},
url = {https://link.springer.com/article/10.1007/s13202-025-02039-y},
year = {2025},
date = {2025-07-23},
urldate = {2025-07-23},
journal = {Journal of Petroleum Exploration and Production Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bisen, Dhananjay; Saurabh, Praneet; Dubey, Aditya; Jaiswal, Varshali
Advancing Neural Architecture Search Through an Innovative Genetic Algorithm with Inverted Swap Crossover Journal Article
In: National Academy Science Letters, 2025.
@article{Bisen-nasc25a,
title = {Advancing Neural Architecture Search Through an Innovative Genetic Algorithm with Inverted Swap Crossover},
author = {Dhananjay Bisen and Praneet Saurabh and Aditya Dubey and Varshali Jaiswal},
url = {https://link.springer.com/article/10.1007/s40009-025-01733-z},
year = {2025},
date = {2025-07-18},
urldate = {2025-07-18},
journal = {National Academy Science Letters},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Salinas, Luis Ignacio Ferro; Castillo, Esteban; Hernández, Víctor Adrián Sosa
Toward the Use of Neural Architecture Search: A Baseline Approach for Solving the Authorship Verification Problem Journal Article
In: SN Computer Science, 2025.
@article{nokey,
title = {Toward the Use of Neural Architecture Search: A Baseline Approach for Solving the Authorship Verification Problem},
author = {Luis Ignacio Ferro Salinas and Esteban Castillo and Víctor Adrián Sosa Hernández},
url = {https://link.springer.com/article/10.1007/s42979-025-04172-1},
year = {2025},
date = {2025-07-16},
urldate = {2025-07-16},
journal = {SN Computer Science},
keywords = {},
pubstate = {published},
tppubtype = {article}
}