AI – machine learning algorithms applied to transformer diagnostics


by Dr. Luiz Cheim


  1. INTRODUCTION

1.1 Dataset

The dataset employed to train the machine learning algorithms contained 24 typical transformer parameters such as nameplate data, DGA, oil quality, insulation power factor, etc. Table 1 and Table 2 provide a general statistical description of each parameter for the whole dataset.
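As a purely illustrative sketch (the parameter names, distributions and values below are assumptions, not the article's actual data), per-parameter summary statistics of the kind shown in Table 1 and Table 2 can be produced with a few lines of Python:

```python
# Illustrative only: a synthetic fleet of 1,000 transformers with a handful of
# made-up parameters, summarised per parameter as in Tables 1 and 2.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000  # fleet size used in the article

df = pd.DataFrame({
    "age_years":        rng.uniform(1, 60, n),       # nameplate-derived age (assumed)
    "H2_ppm":           rng.lognormal(3.0, 1.0, n),  # hydrogen from DGA (assumed)
    "C2H2_ppm":         rng.lognormal(0.5, 1.2, n),  # acetylene from DGA (assumed)
    "moisture_ppm":     rng.uniform(5, 40, n),       # oil quality indicator (assumed)
    "power_factor_pct": rng.uniform(0.2, 2.0, n),    # insulation power factor (assumed)
})

# Count, mean, standard deviation, min, quartiles and max for every parameter
print(df.describe().T)
```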

1.2 Machine learning training with 10-fold cross-validation

The training was carried out by first randomly partitioning the original dataset of 1,000 transformers into two subsets: one containing data for 800 transformers (the training dataset), with the remaining 200 transformers used as the validation or test dataset. The training process was supervised learning based on a 10-fold cross-validation procedure with 3 repeats, yielding 30 output accuracies for each machine learning algorithm [2-5], each accuracy corresponding to one fold of a given repeat. The supervised learning was supported by human experts who had analyzed the same 1,000 cases provided to the machine learning algorithms.
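The split and cross-validation protocol can be sketched as follows; this is a minimal Python/scikit-learn illustration assuming a placeholder dataset and a single placeholder classifier, not the article's actual data or models:

```python
# Minimal sketch of the protocol: 800/200 split, then 10-fold CV repeated 3 times,
# giving 30 accuracy values for one (placeholder) algorithm. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     train_test_split)

# Stand-in for the real data: 1,000 transformers x 24 parameters, binary expert label
X, y = make_classification(n_samples=1000, n_features=24, random_state=0)

# 800 transformers for training, 200 held out for validation/testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=800, test_size=200, stratify=y, random_state=0)

# 10-fold cross-validation with 3 repeats -> 30 accuracy scores per algorithm
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X_train, y_train, cv=cv, scoring="accuracy")

print(len(scores), round(scores.mean(), 3), round(scores.std(), 3))  # 30 accuracies
```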


Fitting input data to desired output data using ML algorithms is called learning or training, and once the ML model is trained, it can be used to predict the output value for arbitrary inputs
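A minimal illustration of that fit-then-predict workflow, again with placeholder data and a placeholder model:

```python
# Placeholder data and model, only to illustrate fit (training) followed by predict.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=24, random_state=1)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)              # "learning"/"training": fit inputs to the desired outputs
print(model.predict(X[:5]))  # predict outputs for arbitrary inputs (here, five rows)
```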

 

Machine learning algorithms

The following 12 ML algorithms were trained and compared in the present work (a sketch of how they could be instantiated in code follows the list):

Linear algorithms

  1. Generalized linear model (logistic regression) – GLM
  2. Linear discriminant analysis – LDA

Non-linear algorithms

  1. Classification and regression trees (CART)
  2. C5.0 (a decision tree algorithm similar to CART)
  3. Naïve Bayes algorithm (NB)
  4. K-nearest neighbor (KNN)
  5. Support vector machine (SVM)

Ensemble algorithms

  1. Random forest (a stochastic ensemble of a large number of CART trees)
  2. Tree bagging (bootstrap aggregation of decision trees)
  3. Extreme gradient boosting machine (XGB tree)
  4. Extreme gradient boosting machine (XGB linear)
  5. Artificial neural networks (ANN – not deep learning yet)
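As referenced above, the list could be instantiated along the following lines using the open-source scikit-learn and xgboost libraries; these are common counterparts of the named algorithms, not necessarily the exact implementations used in the study, and C5.0 (which has no scikit-learn version) is approximated here by an entropy-based decision tree:

```python
# Possible open-source counterparts of the 12 algorithms (scikit-learn plus the
# xgboost package). These are assumptions for illustration, not the article's
# exact implementations.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier  # requires the separate xgboost package

models = {
    # Linear algorithms
    "GLM (logistic regression)": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    # Non-linear algorithms
    "CART": DecisionTreeClassifier(),
    "C5.0 (stand-in)": DecisionTreeClassifier(criterion="entropy"),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    # Ensemble algorithms
    "Random forest": RandomForestClassifier(n_estimators=500),
    "Tree bagging": BaggingClassifier(n_estimators=500),  # bags decision trees by default
    "XGB tree": XGBClassifier(booster="gbtree"),
    "XGB linear": XGBClassifier(booster="gblinear"),
    "ANN (shallow MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}
```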

The following section describes the meaning of statistical learning for each algorithm.

  2. Statistical learning process

Statistical learning has a different interpretation for each of the above-indicated algorithms. In linear regression, for example, the learning process is associated with the search for the optimal linear model coefficients that best correlate inputs to outputs in a given problem.
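For instance, fitting a logistic regression (GLM) to a dataset exposes exactly those optimized coefficients; the data below are synthetic and for illustration only:

```python
# Synthetic data, for illustration only: fitting a logistic regression (GLM) is a
# search for the coefficient values that best map the 24 inputs to the output.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=24, random_state=2)
glm = LogisticRegression(max_iter=1000).fit(X, y)

print(glm.coef_)       # one optimised coefficient per input parameter
print(glm.intercept_)  # optimised intercept term
```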

 
