Neural Architecture Search with Bayesian Optimization: Designing LSTM Networks to Predict Damage Evolution in Composite Laminates
This work takes a step toward Automated Machine Learning (AutoML) by moving beyond the traditional black-box approach often seen in engineering applications of neural networks. The primary objective is to advance the understanding of Neural Architecture Search (NAS), with a specific emphasis on using Bayesian Optimization (BO) with conditional constraints to fine-tune neural architectures and hyperparameters. To the best of our knowledge, NAS frameworks remain underexplored in engineering contexts, where neural architectures are typically developed through ad hoc, manually crafted designs. In this context, Long Short-Term Memory (LSTM) recurrent neural architectures for predicting highly nonlinear delamination growth patterns in composite laminates are automatically designed using NAS. The sequential data, such as nonlinear degradation with increasing load, are generated through finite element analysis (FEA). The automated selection process produces neural network models specifically tailored to capture the complex, history-dependent nature of delamination growth. The study investigates both shallow and progressively deeper architectures through extensive systematic experiments, statistical analysis, and dimensionality reduction techniques, offering insights into the inner workings of BO in optimizing neural architectures. NAS results reveal that the objective function in BO can be highly irregular, with no clearly defined local minima, and that deeper neural networks are not universally superior to shallower ones. Additionally, random search serves as a natural baseline against which to judge BO performance. The findings underscore the effectiveness of LSTM networks in modeling sequence-to-sequence relationships in composite damage evolution, and these networks exhibit strong potential as surrogate models that deliver near real-time predictions, a capability difficult to achieve with traditional FEA.
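To make the BO-versus-random-search comparison concrete, the sketch below runs a minimal Bayesian optimization loop (Gaussian-process surrogate with an RBF kernel and a lower-confidence-bound acquisition) against a random-search baseline on a toy one-dimensional objective. Everything here is illustrative, not from the paper: the toy objective stands in for the validation loss of an LSTM as one rescaled hyperparameter varies, and a real NAS run would instead train a network per evaluation and typically use an established library such as scikit-optimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy stand-in for the validation loss of an LSTM whose single
    # hyperparameter x (e.g. hidden size, rescaled to [0, 1]) is tuned;
    # the real objective would train a network on FEA-generated sequences.
    return np.sin(12 * x) * x + 0.5 * x ** 2

def rbf(a, b, length=0.15):
    # Squared-exponential (RBF) kernel between 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Standard noise-free GP regression posterior mean and std. dev.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    sd = np.sqrt(np.maximum(np.diag(Kss - Ks.T @ Kinv @ Ks), 1e-12))
    return mu, sd

def bayes_opt(budget=13, n_init=3, kappa=2.0):
    # BO loop: random initial design, then pick the candidate that
    # minimizes the lower confidence bound mu - kappa * sd.
    cand = np.linspace(0.0, 1.0, 201)
    X = rng.uniform(0.0, 1.0, n_init)
    y = objective(X)
    while len(X) < budget:
        mu, sd = gp_posterior(X, y, cand)
        x_next = cand[np.argmin(mu - kappa * sd)]
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    return X, y

X, y = bayes_opt()
x_rand = rng.uniform(0.0, 1.0, len(X))  # random-search baseline, same budget
best_bo, best_rand = y.min(), objective(x_rand).min()
print(f"BO best: {best_bo:.3f} at x={X[y.argmin()]:.3f}; "
      f"random best: {best_rand:.3f}")
```

With a fixed evaluation budget, the interesting comparison is simply which strategy reaches the lower objective value; on irregular objectives like the one the paper reports, random search can be a surprisingly strong baseline.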