Learning

Training generic ML models

MLatom can train kernel ridge regression (KRR) models on any generic data set with input vectors X and reference labels Y. A range of kernel functions is supported. Instead of using this option, it may be more convenient to use one of the popular ML models available in MLatom.
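
For reference, in its standard textbook form (MLatom's exact conventions may differ in minor details), KRR predicts via a kernel expansion over the training points,

f(x) = Σᵢ αᵢ k(x, xᵢ),

with regression coefficients obtained by solving

α = (K + λI)⁻¹ y,  where Kᵢⱼ = k(xᵢ, xⱼ).

Here λ is the regularization hyperparameter, and the kernel function, e.g., the Gaussian kernel k(x, x′) = exp(−‖x − x′‖²/(2σ²)), carries the length-scale hyperparameter σ. These two hyperparameters appear throughout this section.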

Required arguments

Below are the required arguments; typically more options are needed, e.g., for choosing a molecular descriptor and algorithm hyperparameters, as shown later.

  • createMLmodel

    • requests training an ML model.

      Currently only KRR models are supported.

  • XYZfile=[input file with XYZ coordinates] or XfileIn=[input file with input vectors X]

    • one and only one of these two options can be chosen.

      No default file names.

    • XYZfile: requests training on a data set of molecules provided in a file with their XYZ coordinates. The units of the coordinates are arbitrary, but Å is recommended since many simulations with MLatom require it.

      XfileIn: requests training on a data set of input vectors (one input vector per line in a text file), which are typically molecular descriptors.

  • Yfile=[input file with reference values] and/or YgradXYZfile=[input file with reference XYZ gradients]

    • one or both of these two options can be chosen.

      No default file names.

    • Yfile values are often energies; Hartree is recommended if the model is intended for use in further simulations.

      YgradXYZfile values are often energy gradients; Hartree/Å is recommended. Note that gradients are negative forces, so the appropriate sign must be used. Sparse gradients can also be provided: for geometries without gradients, the YgradXYZfile should contain ‘0’ followed by a blank line (see tutorial).

  • MLmodelOut=[output file with trained model]

    • no default file name.

    • saves the model to a user-defined file, commonly with the .unf extension. If the file already exists, MLatom will stop rather than overwrite it. A minimal training command is sketched below.
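
Putting the required arguments together, a minimal training command could look as follows (a sketch: the file names h2o.xyz, en.dat, and mlmod.unf are placeholders, and in practice the descriptor and hyperparameter options described below, such as sigma=opt, are usually added):

mlatom createMLmodel XYZfile=h2o.xyz Yfile=en.dat MLmodelOut=mlmod.unf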

Molecular descriptor arguments

If the user provides only an XYZ file via the XYZfile argument, the XYZ coordinates first need to be converted into a molecular descriptor.

  • molDescriptor=[molecular descriptor]

    • RE [default] (relative-to-equilibrium)

      CM (Coulomb matrix)

      ID (inverse internuclear distances)

    • The RE descriptor is well suited for an accurate description of a single-molecule PES.

      CM is a popular (but somewhat outdated) descriptor which can in principle also be applied to different molecules. In MLatom, the full (vectorized) CM is used, not its eigenvalues as in the original publication.

      ID is a popular inverse-internuclear-distances descriptor used in many ML models; it is applicable to a single-molecule PES and similar to the RE descriptor.

  • molDescrType=[type of molecular descriptor]

    • unsorted [default for RE]

      sorted [default for CM]

      permuted (optional, can be used for both RE and CM)

    • unsorted descriptors are the original descriptors, but they do not ensure permutational invariance with respect to homonuclear atoms.

      sorted descriptors ensure permutational invariance and are typically used for the CM descriptor (where the CM is sorted by its norms). In the case of the RE descriptor, sorting is done by nuclear repulsions. Sorting can be used for structure-based sampling, but it introduces discontinuities in the interpolant and should not be used for simulations. Related options: XYZsortedFileOut, permInvGroups, permInvNuclei. See also the related tutorial.

      permuted augments the descriptor with permutations of user-defined atoms. Related arguments: permInvKernel, permInvGroups, permInvNuclei. See also the related tutorial.

  • XYZsortedFileOut=[output file with sorted XYZ coordinates]

    • optional.

      Only works with molDescriptor=RE molDescrType=sorted.

    • sorts the chosen atoms by nuclear repulsion and saves a file with the sorted XYZ coordinates.

  • permInvNuclei=[permutationally invariant nuclei]

    • optional.

      Should be used with molDescrType=permuted (and often with permInvKernel)

    • E.g., permInvNuclei=2-3.5-6 will permute atoms 2,3 and 5,6. See also the related tutorial.

  • permInvGroups=[permutationally invariant groups]

    • optional.

      Should be used with molDescrType=permuted (and often with permInvKernel)

    • E.g., for the water dimer, permInvGroups=1,2,3-4,5,6 generates permuted atom indices by interchanging the two monomers of the dimer. A combined usage sketch is given below.
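
As a sketch of how the descriptor and permutation options combine (the file names are hypothetical), a permutationally invariant model for a molecule whose atoms 2 and 3 are interchangeable could be requested as:

mlatom createMLmodel XYZfile=mol.xyz Yfile=en.dat MLmodelOut=mlmod.unf molDescriptor=RE molDescrType=permuted permInvNuclei=2-3 permInvKernel sigma=opt lambda=opt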

Additional output arguments

  • YestFile=[output file with estimated Y values]

    • optional; no default file name.

    • makes predictions Y for the entire data set with the trained model and saves them to the requested file. If a file with the same name already exists, the program will terminate rather than overwrite it.

  • YgradXYZestFile=[output file with estimated XYZ gradients]

    • optional; no default file name.

    • should be used only with the XYZfile option. Calculates first XYZ derivatives for the entire data set with the trained model and saves them to the requested file. If a file with the same name already exists, the program will terminate rather than overwrite it.

  • YgradEstFile=[output file with estimated gradients]

    • optional; no default file name.

    • should be used only with the XfileIn option. Calculates first derivatives for the entire data set with the trained model and saves them to the requested file. If a file with the same name already exists, the program will terminate rather than overwrite it.

Example

Here we show how to train a simple kernel ridge regression model for the H2 dissociation curve.

Download the R_20.dat file with 20 points corresponding to internuclear distances in the H2 molecule in Å.

Download the E_FCI_20.dat file with full CI energies (calculated with the aug-cc-pV6Z basis set, in Hartree) for the above 20 points.

Train the ML model (option createMLmodel) and save it to a file (option MLmodelOut=mlmod_E_FCI_20_overfit.unf) using the above data (training set) and the following command, which requests kernel ridge regression with the Gaussian kernel function and the hyperparameters σ=10⁻¹¹ and λ=0:

mlatom createMLmodel MLmodelOut=mlmod_E_FCI_20_overfit.unf XfileIn=R_20.dat Yfile=E_FCI_20.dat kernel=Gaussian sigma=0.00000000001 lambda=0.0 sampling=none > create_E_FCI_20_overfit.out

In the output file create_E_FCI_20_overfit.out you can see that the error of the created ML model is essentially zero for the training set. The option sampling=none ensures that the order of the training points remains the same as in the original data set (it does not matter for creating this ML model, but will be useful later). You can use the created ML model (options useMLmodel and MLmodelIn) to calculate energies for its own training set and save them to the E_ML_20_overfit.dat file:

mlatom useMLmodel MLmodelIn=mlmod_E_FCI_20_overfit.unf XfileIn=R_20.dat YestFile=E_ML_20_overfit.dat debug > use_E_FCI_20_overfit.out

Now you can compare the reference FCI values with the ML-predicted values and see that they are the same. The option debug also prints the values of the regression coefficients alpha to the output file use_E_FCI_20_overfit.out. You can compare them with the reference FCI energies and see that they are exactly the same (they are given in the same order as the training points).

Now try to calculate the energy with the ML model for any other internuclear distance not present in the training set: the predictions are zero. This means that the ML model is overfitted because of the hyperparameter choice and cannot generalize to new situations. Thus, hyperparameter optimization is strongly recommended.
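
This behavior follows from the KRR expressions given earlier: with such a tiny σ, the kernel matrix is essentially the identity matrix, so with λ=0 the coefficients α simply equal the reference energies, and the kernel expansion decays to zero away from the training points. As a sketch (the output file names are only suggestions), the same model can be retrained with optimized hyperparameters, which should interpolate much more sensibly:

mlatom createMLmodel MLmodelOut=mlmod_E_FCI_20.unf XfileIn=R_20.dat Yfile=E_FCI_20.dat kernel=Gaussian sigma=opt lambda=opt sampling=none > create_E_FCI_20.out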

Optimizing hyperparameters

It is often desirable or necessary to optimize hyperparameters, although many models have reasonable default hyperparameters and/or optimize them by default. There are two main ways to optimize hyperparameters with MLatom, described below:

  1. grid search for KRR models (including KREG & KRR-CM),

  2. optimization with hyperopt.

Grid search is applicable to a small number of hyperparameters (one or two) and is very robust; optimization with hyperopt never guarantees finding good hyperparameters but is more flexible.

Arguments

The optimization objective is to minimize the validation error. For this, the training data set has to be split into the sub-training and validation sets.

  • minimizeError=[type of validation error to minimize]

    • RMSE [default];

    • MAE

  • Nsubtrain=[number of the sub-training points or a fraction of the training points]

    • 80% of the training set by default. If a parameter is a decimal number less than 1, then it is considered to be a fraction of the training set.

    • points can be sampled in one of the usual ways using the sampling argument. By default, randomly.

  • Nvalidate=[number of the validation points or a fraction of the training points]

    • By default, the remaining points of the training set after subtracting the sub-training points. If a parameter is a decimal number less than 1, then it is considered to be a fraction of the training set.

    • points can be sampled in one of the usual ways using the sampling argument. By default, randomly.

  • CVopt

    • optional.

      Related option NcvOptFolds

    • N-fold cross-validation error. By default, 5-fold cross-validation is used.

  • NcvOptFolds=[number of CV folds]

    • 5 [default].

      Can be used only with CVopt.

    • If this number is equal to the number of data points, leave-one-out cross-validation is performed.

      Only random or no sampling can be used for cross-validation.

  • LOOopt

    • optional.

    • Leave-one-out cross-validation.

      Only random or no sampling can be used.

  • iCVoptPrefOut=[prefix of files with indices for CVopt]

    • optional.

      No default prefixes.

    • file names will include the required prefix.

  • Nuse=[N first entries of the data set file to be used]

    • 100% [default];

      optional.

    • sometimes it is useful for tests to use just a part of a data set.
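
As a sketch of these arguments in use (the file names are hypothetical), one can hold out a random 10% of the training set for validation and minimize its MAE, or use 10-fold cross-validation instead:

mlatom createMLmodel XYZfile=mol.xyz Yfile=en.dat MLmodelOut=mlmod.unf sigma=opt lambda=opt Nsubtrain=0.9 Nvalidate=0.1 minimizeError=MAE
mlatom createMLmodel XYZfile=mol.xyz Yfile=en.dat MLmodelOut=mlmod.unf sigma=opt lambda=opt CVopt NcvOptFolds=10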

Grid search for kernel ridge regression models

Grid search is performed on a logarithmic grid. After the best parameters are found in the first iteration, MLatom can perform more iterations of the logarithmic grid search. Grid search is used only for the λ and/or σ hyperparameters of KRR.

  • lgOptDepth=[depth of log search]

    • 3 [default]

    • often, a depth of one or two suffices and is much faster; 3 is the safer option.

  • NlgLambda=[number of points on the logarithmic grid (base 2) for optimization of lambda]

  • lgLambdaL=[lowest value of log2 λ for a logarithmic grid optimization of lambda]

  • lgLambdaH=[highest value of log2 λ for a logarithmic grid optimization of lambda]

  • NlgSigma=[number of points on the logarithmic grid (base 2) for optimization of sigma]

  • lgSigmaL=[lowest value of log2 σ for a logarithmic grid optimization of sigma]

    • 6.0 [default for kernel=Gaussian and kernel=Matern];

      5.0 [default for kernel=Laplacian and kernel=exponential]

    • used with kernel ridge regression and the sigma=opt argument.

  • lgSigmaH=[highest value of log2 σ for a logarithmic grid optimization of sigma]

    • 9.0 [default for kernel=Gaussian and kernel=Matern];

      12.0 [default for kernel=Laplacian and kernel=exponential]

  • on-the-fly

    • not used by default.

      Optional.

    • requests on-the-fly calculation of the kernel matrix elements used for validation. By default, these elements are stored, which makes calculations faster.
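
For illustration (the grid bounds and file names are only suggestions), a two-iteration grid search over both hyperparameters on 11-point base-2 logarithmic grids could be requested as:

mlatom createMLmodel XYZfile=mol.xyz Yfile=en.dat MLmodelOut=mlmod.unf sigma=opt lambda=opt lgOptDepth=2 NlgSigma=11 lgSigmaL=6.0 lgSigmaH=9.0 NlgLambda=11 lgLambdaL=-35.0 lgLambdaH=-6.0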

Optimization with hyperopt

Optimization with hyperopt requires installation of the hyperopt package. This package provides a general solution to the optimization problem using Bayesian optimization with the Tree-structured Parzen Estimator (TPE).

  • [argument name of hyperparameter to optimize, e.g., sigma]=hyperopt.uniform(lb,ub) or [argument name of hyperparameter to optimize, e.g., sigma]=hyperopt.loguniform(lb,ub) or [argument name of hyperparameter to optimize, e.g., sigma]=hyperopt.quniform(lb,ub,q)

    • No default values.

    • lb is the lower bound and ub is the upper bound.

      hyperopt.uniform(lb,ub): linear search space.

      hyperopt.loguniform(lb,ub): logarithmic search space, base 2.

      hyperopt.quniform(lb,ub,q): discrete linear search space, rounded by q.

  • hyperopt.max_evals=[maximum number of attempts]

    • 8 [default]

    • often, several hundred or even thousands of evaluations are required.

  • hyperopt.losstype=[type of loss for several reference properties]

    • geomean [default];

      weighted (used with hyperopt.w_grad)

    • geomean uses the geometric mean of losses for different properties (typically, energies and forces).

      weighted currently only requires defining the weight for forces (negative XYZ gradients).

  • hyperopt.w_grad=[weight for XYZ gradients]

    • 0.1 [default].

      Should be used with hyperopt.losstype=weighted

  • hyperopt.points_to_evaluate=[xx,xx,...],[xx,xx,...],...

    • optional, no default parameters.

    • specifies initial guesses before auto-searching. Each point inside a pair of square brackets should contain all values to be optimized, in order. These evaluations are NOT counted in max_evals.

Examples

Two typical examples:

mlatom createMLmodel XYZfile=CH3Cl.xyz Yfile=en.dat MLmodelOut=CH3Cl.unf sigma=opt kernel=Matern
mlatom estAccMLmodel XYZfile=CH3Cl.xyz Yfile=en.dat sigma=hyperopt.loguniform(4,20)
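
A further sketch (the bounds, evaluation budget, starting point, and file names are illustrative) optimizes both σ and λ with a larger evaluation budget and a weighted energy/gradient loss:

mlatom createMLmodel XYZfile=CH3Cl.xyz Yfile=en.dat YgradXYZfile=grad.dat MLmodelOut=CH3Cl.unf sigma=hyperopt.loguniform(4,20) lambda=hyperopt.loguniform(-35,-10) hyperopt.max_evals=200 hyperopt.losstype=weighted hyperopt.w_grad=0.1 hyperopt.points_to_evaluate=[8,-20]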

Evaluating ML models

MLatom can evaluate an ML model, i.e., estimate its generalization error. For this, the total data set has to be split into the training and test sets. The ML model can either be trained as usual (with a generic or popular model) or provided with the MLmodelIn argument. If the model is trained, the user can choose the required arguments for training it. Below, only the arguments unique to this feature are given.

Also, MLatom can calculate learning curves (test error vs. training set size).

Arguments

  • estAccMLmodel

    • required.

    • requests estimating the generalization error on the test set. This argument cannot be used together with createMLmodel or useMLmodel. The ML model can either be trained as usual (with a generic or popular model) or provided with the MLmodelIn argument.

  • Ntrain=[number of the training points or a fraction of the total set]

    • 80% of the total set by default. If a parameter is a decimal number less than 1, then it is considered to be a fraction of the total set.

    • points can be sampled in one of the usual ways using the sampling argument. By default, randomly.

  • Ntest=[number of the test points or a fraction of the total set]

    • By default, the remaining points of the total set after subtracting the training points. If a parameter is a decimal number less than 1, then it is considered to be a fraction of the total set.

    • points can be sampled in one of the usual ways using the sampling argument. By default, randomly.

  • CVtest

    • optional.

      Related option NcvTestFolds.

    • N-fold cross-validation error. By default, 5-fold cross-validation is used.

  • NcvTestFolds=[number of CV folds]

    • 5 [default].

      Can be used only with CVtest.

    • If this number is equal to the number of data points, leave-one-out cross-validation is performed. Only random or no sampling can be used for cross-validation.

  • LOOtest

    • optional.

    • leave-one-out cross-validation. Only random or no sampling can be used.

  • learningCurve

    • should be used with the lcNtrains argument.

    • produces learning curves. This option creates the following output files in the learningCurve directory:

      • results.json JSON database file with all results

      • lcy.csv CSV database file with results for Y values

      • lcygradxyz.csv CSV database file with results for XYZ gradients

      • lctimetrain.csv CSV database file with training timings

      • lctimepredict.csv CSV database file with prediction timings

  • lcNtrains=[N,N,N,...,N training set sizes]

    • required argument if learningCurve is requested

  • lcNrepeats=[N,N,N,...,N numbers of repeats for each Ntrain] or lcNrepeats=[N number of repeats for all Ntrains]

    • 3 [default]

    • necessary to get error bars.

  • Nuse=[N first entries of the data set file to be used]

    • 100% [default];

      optional.

    • sometimes it is useful for tests to use just a part of a data set.

  • sampling=user-defined

    • optional.

      Requires arguments iTrainIn, iTestIn, and/or iCVtestPrefIn.

  • iTrainIn=[file with indices of training points]

    • optional.

      No default file names.

  • iTestIn=[file with indices of test points]

    • optional.

      No default file names.

  • iCVtestPrefIn=[prefix of files with indices for CVtest]

    • optional.

      No default file names.

  • MLmodelIn=[file with ML model]

    • optional.

      No default file names.

    • requests reading the file with the ML model.

  • iTrainOut=[file with indices of training points]

    • optional.

      No default file names.

    • generates indices for the training set.

  • iTestOut=[file with indices of test points]

    • optional.

      No default file names.

    • generates indices for the test set.

  • iSubtrainOut=[file with indices of sub-training points]

    • optional.

      No default file names.

    • generates indices for the sub-training set.

  • iValidateOut=[file with indices of validation points]

    • optional.

      No default file names.

    • generates indices for the validation set.

  • iCVtestPrefOut=[prefix of files with indices for CVtest]

    • optional.

      No default file names.

    • file names will include the required prefix.

Examples

Simple example:

mlatom estAccMLmodel XYZfile=CH3Cl.xyz Yfile=en.dat sigma=opt lambda=opt

Example of learning curve:

mlatom learningCurve Yfile=y.dat XYZfile=xyz.dat kernel=Gaussian sigma=opt lambda=opt lcNtrains=100,250,500,1000,2500,5000,10000 lcNrepeats=64,32,16,8,4,2,1

With this command, the training set sizes listed in lcNtrains will be tested 64, 32, 16, 8, 4, 2, and 1 time(s), respectively. All generated data (including the CSV reports) will be stored in the learningCurve folder under the current directory.
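
Two further sketches (the index file names are hypothetical): estimating the test error by 10-fold cross-validation, and reusing fixed training/test splits from index files such as those generated with iTrainOut and iTestOut:

mlatom estAccMLmodel XYZfile=xyz.dat Yfile=y.dat sigma=opt lambda=opt CVtest NcvTestFolds=10
mlatom estAccMLmodel XYZfile=xyz.dat Yfile=y.dat sigma=opt lambda=opt sampling=user-defined iTrainIn=itrain.dat iTestIn=itest.dat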

Δ-learning

Δ-machine learning can be used with one of the usual options. In Δ-learning, the ML model is trained to correct a baseline method toward a target method: at prediction time, the ML correction is added to the baseline value to estimate the target value. Below, the arguments unique to Δ-learning are described. See also a tutorial.

  • deltaLearn

    • required.

      Should be used with one of: createMLmodel, useMLmodel MLmodelIn, estAccMLmodel.

  • Yb=[file with data obtained with baseline method]

    • required for both training and predictions.

  • Yt=[file with data obtained with target method]

    • required only for training.

  • YestT=[file with ML estimations of target method]

    • required for predictions.

  • YestFile=[file with ML corrections to baseline method]

    • required for predictions.

  • YgradXYZb=[file with baseline XYZ gradients]

    • optional.

  • YgradXYZt=[file with target XYZ gradients]

    • optional.

  • YgradXYZestT=[file with ML estimations of target XYZ gradients]

    • optional.

  • YgradXYZestFile=[file with ML corrections to baseline XYZ gradients]

    • optional.

Example

mlatom estAccMLmodel deltaLearn XfileIn=x.dat Yb=UHF.dat Yt=FCI.dat YestT=D-ML.dat YestFile=corr_ML.dat
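
For predictions with an already trained Δ-learning model, a sketch (assuming a model file delta_ML.unf was saved beforehand by adding MLmodelOut=delta_ML.unf to a createMLmodel deltaLearn run) reads the model, adds the ML corrections (corr_ML.dat) to the baseline values, and writes the target estimates to D-ML.dat:

mlatom useMLmodel deltaLearn MLmodelIn=delta_ML.unf XfileIn=x.dat Yb=UHF.dat YestT=D-ML.dat YestFile=corr_ML.dat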

Self-correction

Self-correction, as described here, can be used with one of the usual options. Below, the argument unique to self-correction is described. See also a tutorial.

  • selfCorrect

    • required.

      Should be used with one of: createMLmodel, useMLmodel MLmodelIn, estAccMLmodel.

Example

mlatom estAccMLmodel selfCorrect XYZfile=xyz.dat Yfile=y.dat