Maintained by Difan Deng and Marius Lindauer.
The following list considers papers related to neural architecture search. It is by no means complete. If a paper you are looking for is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-Based spaces, the awesome-transformer-search repo is all you need.
1990
Kitano, Hiroaki
Designing Neural Networks Using Genetic Algorithms with Graph Generation System Journal Article
In: Complex Systems, vol. 4, no. 4, 1990.
@article{Kitano1990_uli,
title = {Designing Neural Networks Using Genetic Algorithms with Graph Generation System},
author = {Hiroaki Kitano},
url = {http://www.complex-systems.com/abstracts/v04_i04_a06/},
year = {1990},
date = {1990-01-01},
journal = {Complex Systems},
volume = {4},
number = {4},
key = {journals/compsys/Kitano90},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
1988
Tenorio, Manoel Fernando; Lee, Wei-Tsih
Self Organizing Neural Networks for the Identification Problem Proceedings Article
In: Advances in Neural Information Processing Systems (NIPS), pp. 57-64, 1988.
@inproceedings{Tenorio1988_rex,
title = {Self Organizing Neural Networks for the Identification Problem},
author = {Manoel Fernando Tenorio and Wei-Tsih Lee},
url = {https://papers.nips.cc/paper/149-self-organizing-neural-networks-for-the-identification-problem},
year = {1988},
date = {1988-01-01},
booktitle = {Advances in Neural Information Processing Systems},
pages = {57-64},
key = {conf/nips/TenorioL88},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
0000
Waris, Faisal; Reynolds, Robert G.; Lee, Joonho
Evolving Deep Neural Networks with Cultural Algorithms for Real-Time Industrial Applications Journal Article
In: International Journal of Semantic Computing, vol. 0, no. 0, pp. 1-32, 0000.
@article{doi:10.1142/S1793351X22500027,
title = {Evolving Deep Neural Networks with Cultural Algorithms for Real-Time Industrial Applications},
author = {Faisal Waris and Robert G. Reynolds and Joonho Lee},
url = {https://doi.org/10.1142/S1793351X22500027},
doi = {10.1142/S1793351X22500027},
year = {0000},
date = {0000-01-01},
urldate = {0000-01-01},
journal = {International Journal of Semantic Computing},
volume = {0},
number = {0},
pages = {1-32},
abstract = {The goal of this paper is to investigate the applicability of evolutionary algorithms to the design of real-time industrial controllers. Present-day “deep learning” (DL) is firmly established as a useful tool for addressing many practical problems. This has spurred the development of neural architecture search (NAS) methods in order to automate the model search activity. CATNeuro is a NAS algorithm based on the graph evolution concept devised by Neuroevolution of Augmenting Topologies (NEAT) but propelled by cultural algorithm (CA) as the evolutionary driver. The CA is a network-based, stochastic optimization framework inspired by problem solving in human cultures. Knowledge distribution (KD) across the network of graph models is a key to problem solving success in CAT systems. Two alternative mechanisms for KD across the network are employed. One supports cooperation (CATNeuro) in the network and the other competition (WM). To test the viability of each configuration prior to use in the industrial setting, they were applied to the design of a real-time controller for a two-dimensional fighting game. While both were able to beat the AI program that came with the fighting game, the cooperative method performed statistically better. As a result, it was used to track the motion of a trailer (in lateral and vertical directions) using a camera mounted on the tractor vehicle towing the trailer. In this second real-time application (trailer motion), the CATNeuro configuration was compared to the original NEAT (elitist) method of evolution. CATNeuro is found to perform statistically better than NEAT in many aspects of the design including model training loss, model parameter size, and overall model structure consistency. In both scenarios, the performance improvements were attributed to the increased model diversity due to the interaction of CA knowledge sources both cooperatively and competitively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Simsek, Ozlem Imik; Alagoz, Baris Baykant
Optimal architecture artificial neural network model design with exploitative alpha gray wolf optimization for soft calibration of CO concentration measurements in electronic nose applications Journal Article
In: Transactions of the Institute of Measurement and Control, vol. 0, no. 0, pp. 01423312221119648, 0000.
@article{doi:10.1177/01423312221119648,
title = {Optimal architecture artificial neural network model design with exploitative alpha gray wolf optimization for soft calibration of CO concentration measurements in electronic nose applications},
author = {Ozlem Imik Simsek and Baris Baykant Alagoz},
url = {https://doi.org/10.1177/01423312221119648},
doi = {10.1177/01423312221119648},
year = {0000},
date = {0000-01-01},
urldate = {0000-01-01},
journal = {Transactions of the Institute of Measurement and Control},
volume = {0},
number = {0},
pages = {01423312221119648},
abstract = {The low-cost and small size solid-state sensor arrays are suitable to implement a wide-area electronic nose (e-nose) for real-time air quality monitoring. However, accuracy of these low-cost sensors is not adequate for precise measurements of pollutant concentrations. Artificial neural network (ANN) estimation models are used for the soft calibration of low-cost sensor array measurements and significantly improve the accuracy of low-cost multi-sensor measurements. However, optimality of neural architecture affects the performance of ANN estimation models, and optimization of the ANN architecture for a training data set is essential to improve data-driven modeling performance of ANNs to reach optimal neural complexity and improved generalization. In this study, an optimal architecture ANN estimator design scheme is suggested to improve the estimation performance of ANN models for e-nose applications. To this end, a gray wolf optimization (GWO) algorithm is modified, and an exploitative alpha gray wolf optimization (EA-GWO) algorithm is suggested. This modification enhances local exploitation skill of the best alpha gray wolf search agent, and thus allows the fine-tuning of ANN architectures by minimizing a multi-objective cost function that implements mean error search policy. Experimental study demonstrates the effectiveness of optimal architecture ANN models to estimate CO concentration from the low-cost multi-sensor data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Soniya; Singh, Lotika; Paul, Sandeep
Hybrid evolutionary network architecture search (HyENAS) for convolution class of deep neural networks with applications Journal Article
In: Expert Systems, vol. n/a, no. n/a, pp. e12690, 0000.
@article{https://doi.org/10.1111/exsy.12690,
title = {Hybrid evolutionary network architecture search (HyENAS) for convolution class of deep neural networks with applications},
author = {Soniya and Lotika Singh and Sandeep Paul},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.12690},
doi = {10.1111/exsy.12690},
journal = {Expert Systems},
volume = {n/a},
number = {n/a},
pages = {e12690},
abstract = {Convolutional Neural Networks (CNNs) and their variants are increasingly used across a wide range of applications, achieving high performance measures. High performance requires an application-specific CNN architecture, hence network architecture search (NAS) becomes essential. This paper proposes a hybrid evolutionary approach for network architecture search (HyENAS) that targets the convolution class of neural networks. One of the significant contributions of this technique is to evolve high-performance networks completely, by simultaneously finding network structures and their corresponding parameters. An elegant string representation is proposed which efficiently represents the network. The concept of sparse blocks evolving the requisite layer-wise features for a dense network is deployed, which permits the network to dynamically evolve for a specific application. In comparison to other state-of-the-art methods, the high performance of the proposed HyENAS approach is demonstrated across various benchmark data sets belonging to the domains of malariology, oncology, neurology, ophthalmology, and genomics. Further, to deploy the proposed model on devices with lower hardware specifications, another salient feature of the HyENAS technique is to seamlessly sift out simpler network architectures with comparable accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Bao Feng; Zhou, Guo Qiang
Control the number of skip-connects to improve robustness of the NAS algorithm Journal Article
In: IET Computer Vision, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1049/cvi2.12036,
title = {Control the number of skip-connects to improve robustness of the NAS algorithm},
author = {Bao Feng Zhang and Guo Qiang Zhou},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/cvi2.12036},
doi = {10.1049/cvi2.12036},
journal = {IET Computer Vision},
volume = {n/a},
number = {n/a},
abstract = {Recently, gradient-based neural architecture search has made remarkable progress, characterized by high efficiency and fast convergence. However, two problems are common in gradient-based NAS algorithms. First, as training time increases, the NAS algorithm tends to favour the skip-connect operation, leading to performance degradation and unstable results. Second, computing resources are not reasonably allocated to valuable candidate network models. These two points make it difficult to search for the optimal sub-network and cause poor stability. To address them, the trick of pre-training the super-net is applied, so that each operation has an equal opportunity to develop its strength, which provides a fair competition condition for the convergence of the architecture parameters. In addition, a skip-controller is proposed to ensure that each sampled sub-network has an appropriate number of skip-connects. Experiments were performed on three mainstream datasets, CIFAR-10, CIFAR-100 and ImageNet, on which the improved method achieves comparable results with higher accuracy and stronger robustness.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Shashirangana, Jithmi; Padmasiri, Heshan; Meedeniya, Dulani; Perera, Charith; Nayak, Soumya R; Nayak, Janmenjoy; Vimal, Shanmuganthan; Kadry, Seifidine
License plate recognition using neural architecture search for edge devices Journal Article
In: International Journal of Intelligent Systems, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1002/int.22471,
title = {License plate recognition using neural architecture search for edge devices},
author = {Jithmi Shashirangana and Heshan Padmasiri and Dulani Meedeniya and Charith Perera and Soumya R Nayak and Janmenjoy Nayak and Shanmuganthan Vimal and Seifidine Kadry},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/int.22471},
doi = {10.1002/int.22471},
journal = {International Journal of Intelligent Systems},
volume = {n/a},
number = {n/a},
abstract = {The mutually beneficial blend of artificial intelligence with the internet of things has been enabling many industries to develop smart information processing solutions. Implementing technology-enhanced industrial intelligence systems is challenging given environmental conditions, resource constraints and safety concerns. In the era of smart homes and cities, domains like automated license plate recognition (ALPR) are exploring the automation of tasks such as traffic management and fraud detection. This paper proposes an optimized decision support solution for ALPR that works purely on edge devices at night-time. Although ALPR is a frequently addressed research problem in the domain of intelligent systems, ALPR systems are generally computationally intensive and unable to run on edge devices with limited resources. Therefore, as a novel approach, we consider the complex aspects of deploying lightweight yet efficient and fast ALPR models on embedded devices. The usability of the proposed models is assessed in the real world with a proof-of-concept hardware design, achieving results competitive with state-of-the-art ALPR solutions that run on server-grade hardware with intensive resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Xingbin; Zhao, Boyan; Hou, Rui; Awad, Amro; Tian, Zhihong; Meng, Dan
NASGuard: A Novel Accelerator Architecture for Robust Neural Architecture Search (NAS) Networks Proceedings Article
In: 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 0000.
@inproceedings{WangISCA2021,
title = {NASGuard: A Novel Accelerator Architecture for Robust Neural Architecture Search (NAS) Networks},
author = {Xingbin Wang and Boyan Zhao and Rui Hou and Amro Awad and Zhihong Tian and Dan Meng},
url = {https://conferences.computer.org/iscapub/pdfs/ISCA2021-4ghucdBnCWYB7ES2Pe4YdT/333300a776/333300a776.pdf},
booktitle = {2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA)},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, Jianwei; Li, Dong; Wang, Lituan; Zhang, Lei
One-Shot Neural Architecture Search by Dynamically Pruning Supernet in Hierarchical Order Journal Article
In: International Journal of Neural Systems, 0000, (PMID: 34128778).
@article{doi:10.1142/S0129065721500295,
title = {One-Shot Neural Architecture Search by Dynamically Pruning Supernet in Hierarchical Order},
author = {Jianwei Zhang and Dong Li and Lituan Wang and Lei Zhang},
url = {https://doi.org/10.1142/S0129065721500295},
doi = {10.1142/S0129065721500295},
journal = {International Journal of Neural Systems},
abstract = {Neural Architecture Search (NAS), which aims at automatically designing neural architectures, has recently drawn growing research interest. Different from conventional NAS methods, in which a large number of neural architectures need to be trained for evaluation, one-shot NAS methods only have to train one supernet which synthesizes all the possible candidate architectures. As a result, search efficiency can be significantly improved by sharing the supernet's weights during the evaluation of candidate architectures. This strategy can greatly speed up the search process, but it suffers from the challenge that evaluation based on shared weights is not sufficiently predictive. Recently, pruning the supernet during the search has been shown to be an efficient way to alleviate this problem. However, the pruning direction in complex-structured search spaces remains unexplored. In this paper, we revisit the role of the path dropout strategy, which drops neural operations instead of neurons, in supernet training, and find several interesting characteristics of the supernet trained with dropout. Based on these observations, a Hierarchically-Ordered Pruning Neural Architecture Search (HOPNAS) algorithm is proposed that dynamically prunes the supernet along a proper pruning direction. Experimental results indicate that our method is competitive with state-of-the-art approaches on CIFAR10 and ImageNet.},
note = {PMID: 34128778},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Xiao; Lei, Lin; Kuang, Gangyao
Multi-Modal Fusion Architecture Search for Land Cover Classification using Heterogeneous Remote Sensing Images Technical Report
0000.
@techreport{Li2021,
title = {Multi-Modal Fusion Architecture Search for Land Cover Classification using Heterogeneous Remote Sensing Images},
author = {Xiao Li and Lin Lei and Gangyao Kuang},
url = {https://www.researchgate.net/profile/Xiao-Li-120/publication/353236680_MULTI-MODAL_FUSION_ARCHITECTURE_SEARCH_FOR_LAND_COVER_CLASSIFICATION_USING_HETEROGENEOUS_REMOTE_SENSING_IMAGES/links/60eeae6316f9f31300802de4/MULTI-MODAL-FUSION-ARCHITECTURE-SEARCH-FOR-LAND-COVER-CLASSIFICATION-USING-HETEROGENEOUS-REMOTE-SENSING-IMAGES.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Sapra, Dolly; Pimentel, Andy D.
Designing convolutional neural networks with constrained evolutionary piecemeal training Journal Article
In: Applied Intelligence, 0000.
@article{Sapra2021,
title = {Designing convolutional neural networks with constrained evolutionary piecemeal training},
author = {Dolly Sapra and Andy D. Pimentel },
url = {https://link.springer.com/article/10.1007/s10489-021-02679-7},
journal = {Applied Intelligence },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Artin, Javad; Valizadeh, Amin; Ahmadi, Mohsen; Kumar, Sathish A. P.; Sharifi, Abbas
Presentation of a Novel Method for Prediction of Traffic with Climate Condition Based on Ensemble Learning of Neural Architecture Search (NAS) and Linear Regression Journal Article
In: Complexity, 0000.
@article{Artin21,
title = {Presentation of a Novel Method for Prediction of Traffic with Climate Condition Based on Ensemble Learning of Neural Architecture Search (NAS) and Linear Regression},
author = {Javad Artin and Amin Valizadeh and Mohsen Ahmadi and Sathish A. P. Kumar and Abbas Sharifi },
url = {https://doi.org/10.1155/2021/8500572},
journal = {Complexity},
abstract = {Traffic prediction is critical to expanding a smart city and country because it improves urban planning and traffic management. This prediction is very challenging due to the multifactorial and random nature of traffic. This study presents a method based on ensemble learning to predict urban traffic congestion based on weather criteria. We used the NAS algorithm, whose output, based on heuristic methods, is an optimal model for the input data. We had 400 data points describing the day's weather through six features: absolute humidity, dew point, visibility, wind speed, cloud height, and temperature, with urban traffic congestion as the target in the final column. We analysed linear regression against the results obtained in the project; this method was more efficient than other regression models. It had an error of 0.00002 in terms of the MSE criterion, whereas the SVR, random forest, and MLP methods had error values of 0.01033, 0.00003, and 0.0011, respectively. According to the MAE criterion, this method achieved a value of 0.0039, while the other methods obtained values of 0.0850, 0.0045, and 0.027, respectively, which shows that our proposed model has a smaller error than the other methods and was able to outperform them.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gong, Yunhong; Sun, Yanan; Peng, Dezhong; Chen, Peng; Yan, Zhongtai; Yang, Ke
Analyze COVID-19 CT images based on evolutionary algorithm with dynamic searching space Journal Article
In: Complex & Intelligent Systems, 0000.
@article{Gong2021,
title = {Analyze COVID-19 CT images based on evolutionary algorithm with dynamic searching space},
author = {Yunhong Gong and Yanan Sun and Dezhong Peng and Peng Chen and Zhongtai Yan and Ke Yang },
url = {https://link.springer.com/article/10.1007/s40747-021-00513-8},
journal = {Complex & Intelligent Systems },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Cheng, Hsin-Pai
Efficient and Generalizable Neural Architecture Search for Visual Recognition PhD Thesis
0000.
@phdthesis{ChengPhD2021,
title = {Efficient and Generalizable Neural Architecture Search for Visual Recognition},
author = {Hsin-Pai Cheng},
url = {https://dukespace.lib.duke.edu/dspace/bitstream/handle/10161/23808/Cheng_duke_0066D_16412.pdf},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Zhu, Xunyu; Li, Jian; Liu, Yong; Liao, Jun; Wang, Weiping
Operation-level Progressive Differentiable Architecture Search Technical Report
0000.
@techreport{ZhuDARTS2021,
title = {Operation-level Progressive Differentiable Architecture Search},
author = {Xunyu Zhu and Jian Li and Yong Liu and Jun Liao and Weiping Wang},
url = {https://gsai.ruc.edu.cn/uploads/20210924/52c916158c2b3d29015ca71d85484c27.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Huang, Sian-Yao; Chu, Wei-Ta
OSNASLib: One-Shot NAS Library Proceedings Article
In: ICCV 2021 Workshop on Neural Architectures: Past, Present and Future, 0000.
@inproceedings{HuangISNASLib2021,
title = {OSNASLib: One-Shot NAS Library},
author = {Sian-Yao Huang and Wei-Ta Chu},
url = {https://neural-architecture-ppf.github.io/papers/00010.pdf},
booktitle = {ICCV 2021 Workshop on Neural Architectures: Past, Present and Future},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Madhu, G.; Bharadwaj, B. Lalith; Boddeda, Rohit; Vardhan, Sai; Kautish, K. Sandeep; Alnowibet, Khalid; Alrasheedi, Adel F.; Mohamed, Ali Wagdy
Deep Stacked Ensemble Learning Model for COVID-19 Classification Technical Report
0000.
@techreport{Madhu2021,
title = {Deep Stacked Ensemble Learning Model for COVID-19 Classification},
author = {G. Madhu and B. Lalith Bharadwaj and Rohit Boddeda and Sai Vardhan and K. Sandeep Kautish and Khalid Alnowibet and Adel F. Alrasheedi and Ali Wagdy Mohamed},
url = {https://www.researchgate.net/profile/B-Lalith-Bharadwaj/publication/355180470_Deep_Stacked_Ensemble_Learning_Model_for_COVID-19_Classification/links/6164cb470bf51d4817768880/Deep-Stacked-Ensemble-Learning-Model-for-COVID-19-Classification.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kandukuri, Nikhil; Sakhtivel, Sangeetha; Xie, Pengtao
Neural Architecture Search For Skin Cancer Detection Technical Report
0000.
@techreport{Kandukuri2021,
title = {Neural Architecture Search For Skin Cancer Detection},
author = {Nikhil Kandukuri and Sangeetha Sakhtivel and Pengtao Xie},
url = {https://assets.researchsquare.com/files/rs-953342/v1_covered.pdf?c=1633976839},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Jiang, Yingying; Gan, Zhuoxin; Lin, Ke; A, Yong
AttNAS: Searching Attentions for Lightweight Semantic Segmentation Proceedings Article
In: British Machine Vision Conference (BMVC) 2021, 0000.
@inproceedings{Jiang2021,
title = {AttNAS: Searching Attentions for Lightweight Semantic Segmentation},
author = {Yingying Jiang and Zhuoxin Gan and Ke Lin and Yong A},
url = {https://www.bmvc2021-virtualconference.com/assets/papers/0575.pdf},
booktitle = {British Machine Vision Conference (BMVC) 2021},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Loni, Mohammad; Mousavi, Hamid; Riazati, Mohammad; Daneshtalab, Masoud; Sjödin, Mikael
TAS: Ternarized Neural Architecture Search for Resource-Constrained Edge Devices Proceedings Article
In: Design, Automation and Test in Europe Conference (DATE) 2022, Antwerp, Belgium, 0000.
@inproceedings{Loni1620831,
title = {TAS: Ternarized Neural Architecture Search for Resource-Constrained Edge Devices},
author = {Mohammad Loni and Hamid Mousavi and Mohammad Riazati and Masoud Daneshtalab and Mikael Sjödin},
url = {https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1620831&dswid=-1720},
booktitle = {Design, Automation and Test in Europe Conference (DATE) 2022, Antwerp, Belgium},
institution = {Shahid Bahonar University of Kerman, Iran},
abstract = {Ternary Neural Networks (TNNs) compress network weights and activation functions into a 2-bit representation, resulting in remarkable network compression and energy efficiency. However, there remains a significant accuracy gap between TNNs and their full-precision counterparts. Recent advances in Neural Architecture Search (NAS) promise opportunities for automated optimization of various deep learning tasks. Unfortunately, this area is unexplored for optimizing TNNs. This paper proposes TAS, a framework that drastically reduces the accuracy gap between TNNs and their full-precision counterparts by integrating quantization into the network design. We observed that directly applying NAS to the ternary domain causes accuracy degradation, as the search settings are customized for full-precision networks. To address this problem, we propose (i) a new cell template for ternary networks with maximum gradient propagation; and (ii) a novel learnable quantizer that adaptively relaxes the ternarization mechanism based on the distribution of the weights and activation functions. Experimental results reveal that TAS delivers 2.64% higher accuracy and 2.8x memory saving over competing methods with the same bit-width resolution on the CIFAR-10 dataset. These results suggest that TAS is an effective method that paves the way for the efficient design of the next generation of quantized neural networks.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ma, Zhiyuan; Yu, Wenting; Zhang, Peng; Huang, Zhi; Lin, Anni; Xia, Yan
LPI Radar Waveform Recognition Based on Neural Architecture Search Journal Article
In: Computational Intelligence and Neuroscience, vol. 2022, 0000.
@article{Ma2022,
title = {LPI Radar Waveform Recognition Based on Neural Architecture Search},
author = {Zhiyuan Ma and Wenting Yu and Peng Zhang and Zhi Huang and Anni Lin and Yan Xia},
url = {https://doi.org/10.1155/2022/4628481},
journal = {Computational Intelligence and Neuroscience},
volume = {2022},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Salam, Hanan; Manoranjan, Viswonathan; Jian, Jiang; Celiktutan, Oya
Learning Personalised Models for Automatic Self-Reported Personality Recognition Technical Report
0000.
@techreport{Salam2022,
title = {Learning Personalised Models for Automatic Self-Reported Personality Recognition},
author = {Hanan Salam and Viswonathan Manoranjan and Jiang Jian and Oya Celiktutan},
url = {https://nyuscholars.nyu.edu/ws/files/139000086/Proceedings_ICCV_2021_Understanding_Social_Behavior_in_Dyadic_and_Small_Group_Interactions_Challenge_8.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Dhanaraj, Mayur; Do, Huyen; Nair, Dinesh; Xu, Cong
Leveraging Tensor Methods in Neural Architecture Search for the automatic development of lightweight Convolutional Neural Networks Journal Article
In: 0000.
@article{dhanarajaleveraging,
title = {Leveraging Tensor Methods in Neural Architecture Search for the automatic development of lightweight Convolutional Neural Networks},
author = {Mayur Dhanaraj and Huyen Do and Dinesh Nair and Cong Xu},
url = {https://assets.amazon.science/34/29/484392a6450b8af9b7646fe6db60/leveraging-tensor-methods-in-neural-architecture-search-for-the-automatic-development-of-lightweight-convolutional-neural-networks.pdf},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yang, Chengrun; Bender, Gabriel; Liu, Hanxiao; Kindermans, Pieter-Jan; Udell, Madeleine; Lu, Yifeng; Le, Quoc; Huang, Da
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets Technical Report
0000.
@techreport{https://doi.org/10.48550/arxiv.2204.07615,
title = {TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets},
author = {Chengrun Yang and Gabriel Bender and Hanxiao Liu and Pieter-Jan Kindermans and Madeleine Udell and Yifeng Lu and Quoc Le and Da Huang},
url = {https://arxiv.org/abs/2204.07615},
doi = {10.48550/ARXIV.2204.07615},
publisher = {arXiv},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Hao, Yao; Zhang, Xizhe; Wang, Jie; Zhao, Tianyu; Sun, Baozhou
Improvement of IMRT QA prediction using imaging-based neural architecture search Journal Article
In: Medical Physics, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1002/mp.15694,
title = {Improvement of IMRT QA prediction using imaging-based neural architecture search},
author = {Yao Hao and Xizhe Zhang and Jie Wang and Tianyu Zhao and Baozhou Sun},
url = {https://aapm.onlinelibrary.wiley.com/doi/abs/10.1002/mp.15694},
doi = {10.1002/mp.15694},
journal = {Medical Physics},
volume = {n/a},
number = {n/a},
abstract = {Purpose: Machine learning (ML) has been used to predict the gamma passing rate (GPR) of intensity-modulated radiation therapy (IMRT) QA results. In this work, we applied a novel neural architecture search to automatically tune and search for the best deep neural networks instead of using hand-designed deep learning architectures. Methods and materials: One hundred and eighty-two IMRT plans were created and delivered with portal dosimetry. A total of 1497 fields for multiple treatment sites were delivered and measured by portal imagers. Gamma criteria of 2%/2 mm with a 5% threshold were used. Fluence maps calculated for each plan were used as inputs to a convolutional neural network (CNN). Auto-Keras was implemented to search for the best CNN architecture for fluence image regression. Network morphism was adopted in the search process, in which the base models were ResNet and DenseNet. The performance of this CNN approach was compared with tree-based ML models previously developed for this application, using the same dataset. Results: The deep-learning-based approach had 98.3% of predictions within 3% of the measured 2%/2-mm GPRs, with a maximum error of 3.1% and a mean absolute error of less than 1%. Our results show that this novel architecture search approach achieves performance comparable to machine-learning-based approaches with handcrafted features. Conclusions: We implemented a novel CNN model using imaging-based neural architecture search for IMRT QA prediction. The imaging-based deep-learning method does not require manual extraction of relevant features and is able to automatically select the best network architecture.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhai, Qihang; Li, Yan; Zhang, Zilin; Li, Yunjie; Wang, Shafei
Adaptive feature extraction and fine-grained modulation recognition of multi-function radar under small sample conditions Journal Article
In: IET Radar, Sonar & Navigation, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1049/rsn2.12273,
title = {Adaptive feature extraction and fine-grained modulation recognition of multi-function radar under small sample conditions},
author = {Qihang Zhai and Yan Li and Zilin Zhang and Yunjie Li and Shafei Wang},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/rsn2.12273},
doi = {10.1049/rsn2.12273},
journal = {IET Radar, Sonar & Navigation},
volume = {n/a},
number = {n/a},
abstract = {Multi-function radars (MFRs) are sophisticated sensors with fine-grained modes, which modify their modulation types and parameter ranges to generate various signals fulfilling different tasks, such as surveillance and tracking. In electromagnetic reconnaissance, recognition of MFR fine-grained modes can provide a basis for analysing strategies and planning reactions. Given the limits of real applications, it is hard to obtain a large number of labelled samples for existing methods to learn the differences between categories. Therefore, it is essential to develop new methods to extract general knowledge of MFRs and identify modes with only a few samples. This paper proposes a few-shot learning (FSL) framework based on efficient neural architecture search (ENAS) with high robustness and portability, which designs a suitable network structure automatically and quickly adapts to new environments. The experimental results show that the proposed method can still achieve excellent fine-grained modulation recognition performance (92.6%) under the condition of a -6 dB signal-to-noise ratio (SNR), even when each class provides only one fixed-duration signal sample. The robustness is also verified under different conditions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wang, Ye-Qun; Li, Jian-Yu; Chen, Chun-Hua; Zhang, Jun; Zhan, Zhi-Hui
Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning Journal Article
In: CAAI Transactions on Intelligence Technology, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1049/cit2.12106,
title = {Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning},
author = {Ye-Qun Wang and Jian-Yu Li and Chun-Hua Chen and Jun Zhang and Zhi-Hui Zhan},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/cit2.12106},
doi = {https://doi.org/10.1049/cit2.12106},
journal = {CAAI Transactions on Intelligence Technology},
volume = {n/a},
number = {n/a},
abstract = {Abstract Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: Both the hyperparameter and architecture should be optimised and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimization problem for the NN and proposes a novel light-weight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch different evaluation methods to better balance the search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: The 10-category (i.e., CIFAR10) and the 100−category (i.e., CIFAR100). The experimental results show that SAFE-PSO is very effective and efficient, which can not only find a promising NN automatically but also find a better NN than compared algorithms at the same computational cost.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hu, Weifei; Shao, Jinyi; Jiao, Qing; Wang, Chuxuan; Cheng, Jin; Liu, Zhenyu; Tan, Jianrong
A new differentiable architecture search method for optimizing convolutional neural networks in the digital twin of intelligent robotic grasping Journal Article
In: Journal of Intelligent Manufacturing, 0000.
@article{Hu-JIM22,
title = {A new differentiable architecture search method for optimizing convolutional neural networks in the digital twin of intelligent robotic grasping},
author = {Weifei Hu and Jinyi Shao and Qing Jiao and Chuxuan Wang and Jin Cheng and Zhenyu Liu and Jianrong Tan},
url = {https://link.springer.com/article/10.1007/s10845-022-01971-8},
journal = {Journal of Intelligent Manufacturing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xue, Fanghui
Relaxation and Optimization for Automated Learning of Neural Network Architectures PhD Thesis
0000.
@phdthesis{XuePHD2022,
title = {Relaxation and Optimization for Automated Learning of Neural Network Architectures},
author = {Fanghui Xue},
url = {https://escholarship.org/content/qt3wt239sm/qt3wt239sm.pdf},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Mokhtari, Nassim; Nédélec, Alexis; Gilles, Marlene; Loor, Pierre De
Improving Neural Architecture Search by Mixing a FireFly algorithm with a Training Free Evaluation Technical Report
0000.
@techreport{mokhtariimproving,
title = {Improving Neural Architecture Search by Mixing a FireFly algorithm with a Training Free Evaluation},
author = {Nassim Mokhtari and Alexis Nédélec and Marlene Gilles and Pierre De Loor},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Duggal, Rahul
Robust Efficient Edge AI: New Principles and Frameworks for Empowering Artificial Intelligence on Edge Devices PhD Thesis
0000.
@phdthesis{DuggalPhD22,
title = {Robust Efficient Edge AI: New Principles and Frameworks for Empowering Artificial Intelligence on Edge Devices},
author = {Rahul Duggal},
url = {https://smartech.gatech.edu/bitstream/handle/1853/67315/DUGGAL-DISSERTATION-2022.pdf?sequence=1},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Wen, Hao; Kang, Jingsu
Searching for Effective Neural Network Architectures for Heart Murmur Detection from Phonocardiogram Technical Report
0000.
@techreport{WenTS22,
title = {Searching for Effective Neural Network Architectures for Heart Murmur Detection from Phonocardiogram},
author = {Hao Wen and Jingsu Kang},
url = {https://cinc.org/2022/Program/accepted/130_Preprint.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Loni, Mohammad
Efficient Design of Scalable Deep Neural Networks for Resource-Constrained Edge Devices PhD Thesis
0000.
@phdthesis{LoniPhD,
title = {Efficient Design of Scalable Deep Neural Networks for Resource-Constrained Edge Devices},
author = {Mohammad Loni},
url = {https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1695852&dswid=-3791},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Feng, Wenfeng; Zhang, Xin; Song, Qiushuang; Sun, Guoying
Incoherence of Deep Isotropic Neural Networks increases their performance on Image Classification Technical Manual
0000.
@manual{Feng-preprints22,
title = {Incoherence of Deep Isotropic Neural Networks increases their performance on Image Classification},
author = {Wenfeng Feng and Xin Zhang and Qiushuang Song and Guoying Sun},
url = {https://www.preprints.org/manuscript/202210.0092/v1},
keywords = {},
pubstate = {published},
tppubtype = {manual}
}
Malkova, Aleksandra; Amini, Massih-Reza; Denis, Benoit; Villien, Christophe
Radio Map Reconstruction with Deep Neural Networks in a Weakly Labeled Learning Context with use of Heterogeneous Side Information Technical Manual
0000.
@manual{Malkova2022,
title = {Radio Map Reconstruction with Deep Neural Networks in a Weakly Labeled Learning Context with use of Heterogeneous Side Information},
author = {Aleksandra Malkova and Massih-Reza Amini and Benoit Denis and Christophe Villien},
url = {https://hal.archives-ouvertes.fr/hal-03823629/document},
keywords = {},
pubstate = {published},
tppubtype = {manual}
}
Aboalam, Kawther; Neuswirth, Christoph; Pernau, Florian; Schiebel, Stefan; Spaethe, Fabian; Strohrmann, Manfred
Image Processing and Neural Network Optimization Methods for Automatic Visual Inspection Technical Manual
0000.
@manual{Aboalam2022,
title = {Image Processing and Neural Network Optimization Methods for Automatic Visual Inspection},
author = {Kawther Aboalam and Christoph Neuswirth and Florian Pernau and Stefan Schiebel and Fabian Spaethe and Manfred Strohrmann},
url = {https://www.researchgate.net/profile/Christoph-Reich/publication/364343172_Artificial_Intelligence_--_Applications_in_Medicine_and_Manufacturing_--_The_Upper_Rhine_Artificial_Intelligence_Symposium_UR-AI_2022/links/634cfa3476e39959d6c8bfb2/Artificial-Intelligence--Applications-in-Medicine-and-Manufacturing--The-Upper-Rhine-Artificial-Intelligence-Symposium-UR-AI-2022.pdf#page=33},
keywords = {},
pubstate = {published},
tppubtype = {manual}
}
Mishra, Vidyanand; Kane, Lalit
A survey of designing convolutional neural network using evolutionary algorithms Journal Article
In: Artificial Intelligence Review, 0000.
@article{Mishra-AIR2022,
title = {A survey of designing convolutional neural network using evolutionary algorithms},
author = {Vidyanand Mishra and Lalit Kane},
url = {https://link.springer.com/article/10.1007/s10462-022-10303-4},
journal = {Artificial Intelligence Review},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Yanan; Yen, Gary G.; Zhang, Mengjie
Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances Book
0000.
@book{SunEDA22,
title = {Evolutionary Deep Neural Architecture Search: Fundamentals, Methods, and Recent Advances},
author = {Yanan Sun and Gary G. Yen and Mengjie Zhang},
url = {https://books.google.de/books?hl=de&lr=&id=2RWbEAAAQBAJ&oi=fnd&pg=PR5&dq=%22neural+architecture+search%22&ots=yjnrR-vqyW&sig=0KFGVSnhQWTc1sQmWWewvmeuGqw#v=onepage&q=%22neural%20architecture%20search%22&f=false},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
Park, Minje
Proxy Data Generation for Fast and Efficient Neural Architecture Search Journal Article
In: Journal of Electrical Engineering & Technology, 0000.
@article{ParkJEET22,
title = {Proxy Data Generation for Fast and Efficient Neural Architecture Search},
author = {Minje Park},
url = {https://link.springer.com/article/10.1007/s42835-022-01321-x},
journal = {Journal of Electrical Engineering & Technology},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Singh, Yeshwant; Biswas, Anupam
Lightweight convolutional neural network architecture design for music genre classification using evolutionary stochastic hyperparameter selection Journal Article
In: Expert Systems, vol. n/a, no. n/a, pp. e13241, 0000.
@article{https://doi.org/10.1111/exsy.13241,
title = {Lightweight convolutional neural network architecture design for music genre classification using evolutionary stochastic hyperparameter selection},
author = {Yeshwant Singh and Anupam Biswas},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.13241},
doi = {https://doi.org/10.1111/exsy.13241},
journal = {Expert Systems},
volume = {n/a},
number = {n/a},
pages = {e13241},
abstract = {Abstract Convolutional neural networks (CNNs) have succeeded in various domains, including music information retrieval (MIR). Music genre classification (MGC) is one such task in the MIR that has gained attention over the years because of the massive increase in online music content. Accurate indexing and automatic classification of these large volumes of music content require high computational resources, which pose a significant challenge to building a lightweight system. CNNs are a popular deep learning-based choice for building systems for MGC. However, finding an optimal CNN architecture for MGC requires domain knowledge both in CNN architecture design and music. We present MGA-CNN, a genetic algorithm-based approach with a novel stochastic hyperparameter selection for finding an optimal lightweight CNN-based architecture for the MGC task. The proposed approach is unique in automating the CNN architecture design for the MGC task. MGA-CNN is evaluated on three widely used music datasets and compared with seven peer rivals, which include three automatic CNN architecture design approaches and four manually designed popular CNN architectures. The experimental results show that MGA-CNN surpasses the peer approaches in terms of classification accuracy, parameter numbers, and execution time. The optimal architectures generated by MGA-CNN also achieve classification accuracy comparable to the manually designed CNN architectures while spending fewer computing resources.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gupta, Pritha; Drees, Jan Peter; Hüllermeier, Eyke
Automated Side-Channel Attacks using Black-Box Neural Architecture Search Technical Report
0000.
@techreport{Gupta22,
title = {Automated Side-Channel Attacks using Black-Box Neural Architecture Search},
author = {Pritha Gupta and Jan Peter Drees and Eyke Hüllermeier},
url = {https://eprint.iacr.org/2023/093.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Pandelea, Vlad; Ragusa, Edoardo; Gastaldo, Paolo; Cambria, Erik
Selecting Language Models Features via Software-Hardware Co-Design Miscellaneous
0000.
@misc{Pandelea23,
title = {Selecting Language Models Features via Software-Hardware Co-Design},
author = {Vlad Pandelea and Edoardo Ragusa and Paolo Gastaldo and Erik Cambria},
url = {https://w.sentic.net/selecting-language-models-features-via-software-hardware-co-design.pdf},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Huynh, Lam
From 3D Sensing to Dense Prediction PhD Thesis
0000.
@phdthesis{HuynhPhD23,
title = {From 3D Sensing to Dense Prediction},
author = {Lam Huynh},
url = {http://jultika.oulu.fi/files/isbn9789526235165.pdf},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Cho, Minsu
Deep Learning Model Design Algorithms for High-Performing Plaintext and Ciphertext Inference PhD Thesis
0000.
@phdthesis{ChoPHD23,
title = {Deep Learning Model Design Algorithms for High-Performing Plaintext and Ciphertext Inference},
author = {Minsu Cho},
url = {https://www.proquest.com/docview/2767241424?pq-origsite=gscholar&fromopenview=true},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Zhou, Dongzhan
Designing Deep Model and Training Paradigm for Object Perception PhD Thesis
0000.
@phdthesis{ZhouPhD2023,
title = {Designing Deep Model and Training Paradigm for Object Perception},
author = {Dongzhan Zhou},
url = {https://ses.library.usyd.edu.au/handle/2123/31055},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Shariatzadeh, Seyed Mahdi; Fathy, Mahmood; Berangi, Reza
Improving the accuracy and speed of fast template-matching algorithms by neural architecture search Journal Article
In: Expert Systems, vol. n/a, no. n/a, pp. e13358, 0000.
@article{https://doi.org/10.1111/exsy.13358,
title = {Improving the accuracy and speed of fast template-matching algorithms by neural architecture search},
author = {Seyed Mahdi Shariatzadeh and Mahmood Fathy and Reza Berangi},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.13358},
doi = {https://doi.org/10.1111/exsy.13358},
journal = {Expert Systems},
volume = {n/a},
number = {n/a},
pages = {e13358},
abstract = {Abstract Neural architecture search can be used to find convolutional neural architectures that are precise and robust while enjoying enough speed for industrial image processing applications. In this paper, our goal is to achieve optimal convolutional neural networks (CNNs) for multiple-templates matching for applications such as licence plates detection (LPD). We perform an iterative local neural architecture search for the models with minimum validation error as well as low computational cost from our search space of about 32 billion models. We describe the findings of the experience and discuss the specifications of the final optimal architectures. About 20-times error reduction and 6-times computational complexity reduction is achieved over our engineered neural architecture after about 500 neural architecture evaluation (in about 10 h). The typical speed of our final model is comparable to classic template matching algorithms while performing more robust and multiple-template matching with different scales.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yang, Yongjia; Zhan, Jinyu; Jiang, Wei; Jiang, Yucheng; Yu, Antai
Neural architecture search for resource constrained hardware devices: A survey Journal Article
In: IET Cyber-Physical Systems: Theory & Applications, vol. n/a, no. n/a, 0000.
@article{https://doi.org/10.1049/cps2.12058,
title = {Neural architecture search for resource constrained hardware devices: A survey},
author = {Yongjia Yang and Jinyu Zhan and Wei Jiang and Yucheng Jiang and Antai Yu},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/cps2.12058},
doi = {https://doi.org/10.1049/cps2.12058},
journal = {IET Cyber-Physical Systems: Theory & Applications},
volume = {n/a},
number = {n/a},
abstract = {Abstract With the emergence of powerful and low-energy Internet of Things devices, deep learning computing is increasingly applied to resource-constrained edge devices. However, the mismatch between hardware devices with low computing capacity and the increasing complexity of Deep Neural Network models, as well as the growing real-time requirements, bring challenges to the design and deployment of deep learning models. For example, autonomous driving technologies rely on real-time object detection of the environment, which cannot tolerate the extra latency of sending data to the cloud, processing and then sending the results back to edge devices. Many studies aim to find innovative ways to reduce the size of deep learning models, the number of Floating-point Operations per Second, and the time overhead of inference. Neural Architecture Search (NAS) makes it possible to automatically generate efficient neural network models. The authors summarise the existing NAS methods on resource-constrained devices and categorise them according to single-objective or multi-objective optimisation. We review the search space, the search algorithm and the constraints of NAS on hardware devices. We also explore the challenges and open problems of hardware NAS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yan, Longhao; Wu, Qingyu; Li, Xi; Xie, Chenchen; Zhou, Xilin; Li, Yuqi; Shi, Daijing; Yu, Lianfeng; Zhang, Teng; Tao, Yaoyu; Yan, Bonan; Zhong, Min; Song, Zhitang; Yang, Yuchao; Huang, Ru
Neural Architecture Search with In-Memory Multiply–Accumulate and In-Memory Rank Based on Coating Layer Optimized C-Doped Ge2Sb2Te5 Phase Change Memory Journal Article
In: Advanced Functional Materials, vol. n/a, no. n/a, pp. 2300458, 0000.
@article{https://doi.org/10.1002/adfm.202300458,
title = {Neural Architecture Search with In-Memory Multiply–Accumulate and In-Memory Rank Based on Coating Layer Optimized C-Doped Ge2Sb2Te5 Phase Change Memory},
author = {Longhao Yan and Qingyu Wu and Xi Li and Chenchen Xie and Xilin Zhou and Yuqi Li and Daijing Shi and Lianfeng Yu and Teng Zhang and Yaoyu Tao and Bonan Yan and Min Zhong and Zhitang Song and Yuchao Yang and Ru Huang},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202300458},
doi = {https://doi.org/10.1002/adfm.202300458},
journal = {Advanced Functional Materials},
volume = {n/a},
number = {n/a},
pages = {2300458},
abstract = {Abstract Neural architecture search (NAS), as a subfield of automated machine learning, can design neural network models with better performance than manual design. However, the energy and time consumptions of conventional software-based NAS are huge, hindering its development and applications. Herein, 4 Mb phase change memory (PCM) chips are first fabricated that enable two key in-memory computing operations—in-memory multiply-accumulate (MAC) and in-memory rank for efficient NAS. The impacts of the coating layer material are systematically analyzed for the blade-type heating electrode on the device uniformity and in turn NAS performance. The random weights in the searched network architecture can be fine-tuned in the last stage. With 512 × 512 arrays based on 40 nm CMOS process, the PCM-based NAS has achieved 25–53× smaller model size and better performance than manually designed networks and improved the energy and time efficiency by 4779× and 123×, respectively, compared with NAS running on graphic processing unit (GPU). This work can expand the hardware accelerated in-memory operators, and significantly extend the applications of in-memory computing enabled by nonvolatile memory in advanced machine learning tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Addad, Youva; Lechervy, Alexis; Jurie, Frédéric
Multi-Exit Resource-Efficient Neural Architecture for Image Classification with Optimized Fusion Block Technical Report
0000.
@techreport{Addad-hal23a,
title = {Multi-Exit Resource-Efficient Neural Architecture for Image Classification with Optimized Fusion Block},
author = {Youva Addad and Alexis Lechervy and Frédéric Jurie},
url = {https://hal.science/hal-04181149/document},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Tomczak, Nathaniel; Kuppannagari, Sanmukh
Automated Indexing Of TEM Diffraction Patterns Using Machine Learning Technical Report
0000.
@techreport{Tomczak-ieee-hpec23a,
title = {Automated Indexing Of TEM Diffraction Patterns Using Machine Learning},
author = {Nathaniel Tomczak and Sanmukh Kuppannagari},
url = {https://ieee-hpec.org/wp-content/uploads/2023/09/143.pdf},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}