Most work on neural architecture search (NAS; see our recent survey) optimizes for a single criterion: high predictive performance, measured in terms of accuracy. This often results in large, complex network architectures that are unusable in real-world applications where other criteria also matter, such as memory footprint, energy consumption, and latency. A second problem with many traditional NAS methods (based on, e.g., evolution, reinforcement learning, or Bayesian optimization) is their enormous computational cost: running these methods entails training hundreds to thousands of neural networks from scratch, which makes the development phase very expensive and limits research on them to the few groups with massive compute resources.
In our ICLR 2019 paper Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution, we proposed an algorithm called LEMONADE (Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign) to address both of these problems. At a high level, LEMONADE is a simple evolutionary algorithm that optimizes multiple objectives at once, such as accuracy, memory requirements, and latency. (more…)
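To make the multi-objective idea concrete, here is a minimal sketch of Pareto-based evolutionary search. This is an illustrative toy, not LEMONADE itself: the "architecture" is just a channel-width integer, and the two objectives (an error proxy and a size proxy) are made-up stand-ins for validation error and parameter count.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, objectives):
    """Return the members of the population not dominated by any other member."""
    front = []
    for i, p in enumerate(population):
        if not any(dominates(objectives(q), objectives(p))
                   for j, q in enumerate(population) if j != i):
            front.append(p)
    return front

# Toy "architecture": a channel width. Toy objectives: (error proxy, size proxy).
# Wider networks get a lower error proxy but a larger size proxy.
def objectives(width):
    return (1.0 / width, float(width))

def evolve(generations=20, seed=0):
    """Evolve a population, keeping the Pareto front and mutating one parent per step."""
    rng = random.Random(seed)
    population = [rng.randint(1, 64) for _ in range(8)]
    for _ in range(generations):
        parents = pareto_front(population, objectives)           # non-dominated survivors
        child = max(1, rng.choice(parents) + rng.randint(-4, 4)) # mutated offspring
        population = parents + [child]
    return pareto_front(population, objectives)
```

The result of `evolve()` is not a single "best" network but an approximate Pareto front: a set of trade-off solutions, from which a practitioner picks the accuracy/size/latency balance their deployment target allows.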