
Empirical Studies Of Gaussian Process Based Bayesian Optimization Using Evolutionary Computation For Materials Informatics

Hiroshi Ohno
Published 2018 · Computer Science
Abstract

Using evolutionary computation, we empirically investigate the convergence properties of Gaussian process based Bayesian optimization (BO). Evolutionary computation is used both to learn the prediction model and to optimize the acquisition function (the auxiliary search) of BO. With practical use in materials informatics in mind, we address three issues in BO: (1) the stopping conditions, (2) the initial data size, and (3) the unknown smoothness of the target function. We then introduce a goal-directed acquisition function that incorporates a target value representing a desired property of a compound. In addition, we present an ensemble method for BO in which each member has a random property and a kernel function of different smoothness. Experimental results on materials data sets, covering melting points of binary compounds and hydrogen weight percentages of hydrogen storage materials with two to four constituent elements, demonstrate the effectiveness of the ensemble method. We also show that the gains obtained with the ensemble (an increase in the number of acquired samples) are not simply the result of running additional BOs. The goal-directed acquisition function and the ensemble of BOs that we propose could serve as building blocks of a new materials recommendation system with a self-learning algorithm; because BO realizes such self-learning, the system's property-prediction performance would keep improving as data accumulate.
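The two ideas in the abstract, a goal-directed acquisition function that scores candidates by their predicted closeness to a target property value, and an ensemble of BOs whose members use kernels of different smoothness, can be illustrated with a minimal sketch. This is not the paper's implementation: the property function `f`, the target value, the candidate pool, and the score `|mu - target| - sigma` are all illustrative assumptions, and Matern kernels with different `nu` stand in for "kernels with different smoothness".

```python
# Sketch of goal-directed BO with a kernel-smoothness ensemble (assumptions:
# f, target, pool, and the acquisition score are all illustrative, not the
# paper's actual setup).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def f(x):
    # hypothetical black-box property (stand-in for, e.g., a melting point)
    return np.sin(3 * x) + 0.5 * x

target = 1.2                                        # desired property value
X_pool = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # candidate "compounds"

# small initial data set, sampled at random from the pool
acquired = list(rng.choice(len(X_pool), size=5, replace=False))
y = f(X_pool[acquired]).ravel()

for _ in range(15):
    proposals = []
    for nu in (0.5, 1.5, 2.5):         # ensemble: kernels of different smoothness
        gp = GaussianProcessRegressor(kernel=Matern(nu=nu),
                                      alpha=1e-6, normalize_y=True)
        gp.fit(X_pool[acquired], y)
        mu, sigma = gp.predict(X_pool, return_std=True)
        # goal-directed score: prefer candidates predicted near the target,
        # with a bonus for high predictive uncertainty (lower is better)
        score = np.abs(mu - target) - sigma
        score[acquired] = np.inf       # never re-acquire a measured candidate
        proposals.append(int(np.argmin(score)))
    # "random property": acquire the proposal of a randomly chosen member
    pick = proposals[rng.integers(len(proposals))]
    acquired.append(pick)
    y = np.append(y, f(X_pool[pick]))

gap = float(np.min(np.abs(y - target)))  # best distance to the target so far
print(len(y), round(gap, 3))
```

Under this reading, the ensemble hedges against the unknown smoothness of the target function (issue 3 above): no single `nu` has to be right, since each member proposes its own candidate and the acquisition alternates among them.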
This paper references
Hybrid Batch Bayesian Optimization
Javad Azimi (2012)
Evolutionary learning with kernels: a generic solution for large margin problems
Ingo Mierswa (2006)
Random Search for Hyper-Parameter Optimization
James Bergstra (2012)
Taking the Human Out of the Loop: A Review of Bayesian Optimization
Bobak Shahriari (2016)
A Kernel Two-Sample Test
Arthur Gretton (2012)
Completely Derandomized Self-Adaptation in Evolution Strategies
Nikolaus Hansen (2001)
Bayesian Interpolation
David J. C. MacKay (1992)
Bayesian Optimization in a Billion Dimensions via Random Embeddings
Ziyu Wang (2016)
Designing Nanostructures for Phonon Transport via Bayesian Optimization
Shenghong Ju (2016)
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Niranjan Srinivas (2010)
Bayesian Optimization for Materials Design
Peter I. Frazier (2016)
On the analysis of the (1+1) evolutionary algorithm
Stefan Droste (2002)
Learning the Kernel Function via Regularization
Charles A. Micchelli (2005)
The art of molecular dynamics simulation
M.G.B. Drew (1996)
On the analysis of a simple evolutionary algorithm on quadratic pseudo-boolean functions
Ingo Wegener (2005)
Representation of compounds for machine-learning prediction of physical properties
Atsuto Seko (2017)
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
Tomaso A. Poggio (2016)
Sequential Model-Based Optimization for General Algorithm Configuration
Frank Hutter (2011)
Portfolio Allocation for Bayesian Optimization
Matthew D. Hoffman (2011)
Joint Optimization and Variable Selection of High-dimensional Gaussian Processes
Bo Chen (2012)
BBOB-benchmarking the DIRECT global optimization algorithm
Petr Posík (2009)
Kriging is well-suited to parallelize optimization
David Ginsbourger (2010)
Lower bounds on the convergence rate of the Markov symmetric random search
Alexey B. Tikhomirov (2011)
Lipschitzian optimization without the Lipschitz constant
Donald R. Jones (1993)
Uniforming the dimensionality of data with neural networks for materials informatics
Hiroshi Ohno (2016)
Theory of Evolution Strategies: A New Perspective
Anne Auger (2011)
Entropy Search for Information-Efficient Global Optimization
Philipp Hennig (2012)
Sparse partial least squares regression for simultaneous dimension reduction and variable selection
Hyonho Chun (2010)
Introduction to Reinforcement Learning
Richard S. Sutton (1998)
Data mining for materials: Computational experiments with AB compounds
Yousef Saad (2012)
Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
Anne Auger (2016)
Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space
Roman Rosipal (2001)
Dynamic Kernel Selection Criteria for Bayesian Optimization
Ibai Roman (2014)
Self-adaptive mutations may lead to premature convergence
Günter Rudolph (2001)
Interatomic potential construction with self-learning and adaptive database
Kazutoshi Miwa (2017)
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
Niranjan Srinivas (2012)
Machine learning with systematic density-functional theory calculations: Application to melting temperatures of single- and binary-component solids
Atsuto Seko (2014)
Gaussian Processes for Machine Learning
Carl Edward Rasmussen (2005)
Parallel Algorithm Configuration
Frank Hutter (2012)
Evolutionary tuning of multiple SVM parameters
Frauke Friedrichs (2004)
A Machine Learning Approach to Predict Chemical Reactions
Matthew A. Kayala (2011)
From the computer to the laboratory: materials discovery and design using first-principles calculations
Geoffroy Hautier (2012)
