
Machine Learning Tutorials


The term 'machine learning' (ML) describes a system's capacity to gather and synthesize knowledge through extensive observation, and to develop and extend itself by picking up new information rather than having it preprogrammed. At CoderzColumn, you get a glimpse of the vast machine learning field. We cover various concepts through tutorials, including:

  • Visualize ML Metrics
  • Gradient Boosted Decision Trees
  • Interpret Predictions Of ML Models
  • Hyperparameter Tuning / Optimization

For an in-depth understanding of the above concepts, check out the sections below.

Recent Machine Learning Tutorials


  • Scikit-Learn - Support Vector Machine (Tags: svm, sklearn)
  • Scikit-Learn - Neural Network (Tags: sklearn, neural-network)
  • Scikit-Learn - Anomaly Detection [Outliers Detection] (Tags: sklearn, outliers-detection)
  • Scikit-Learn - Clustering: Density-Based Spatial Clustering of Applications with Noise [DBSCAN] (Tags: sklearn, dbscan, clustering)
  • Scikit-Learn - Non-Linear Dimensionality Reduction: Manifold Learning (Tags: sklearn, manifold-learning)
  • Scikit-Learn - Ensemble Learning: Boosting (Tags: sklearn, boosting)
  • Scikit-Learn - Ensemble Learning: Bootstrap Aggregation (Bagging) & Random Forests (Tags: sklearn, ensemble-learning, bagging, random-fores…)
  • Scikit-Learn - Decision Trees (Tags: sklearn, decision-trees)
  • Scikit-Learn - Hierarchical Clustering (Tags: sklearn, hierarchical-clustering)
  • Scikit-Learn - Linear Dimensionality Reduction (PCA) (Tags: sklearn, linear-dimensionality-reduction-pca)

All of the above tutorials are by Sunny Solanki.
Visualize Machine Learning Metrics

Once our machine learning model is trained, we need some way to evaluate its performance. In particular, we need to know whether the model has generalized or merely memorized the training data.

For this, various metrics (confusion matrix, ROC AUC curve, precision-recall curve, silhouette analysis, elbow method, etc.) have been designed over time. These metrics help us understand the performance of models trained for various tasks like classification, regression, clustering, etc.

Python has various libraries (scikit-learn, scikit-plot, yellowbrick, interpret-ml, interpret-text, etc) to calculate and visualize these metrics.
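
As a quick illustration, here is a minimal sketch that computes a few of these metrics with scikit-learn; the dataset and model below are illustrative choices, not ones prescribed by any particular tutorial:

    # A minimal sketch: train a classifier and evaluate it with a few
    # common metrics (dataset and model chosen purely for illustration).
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import (classification_report, confusion_matrix,
                                 roc_auc_score)
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    # Confusion matrix: rows are true labels, columns are predicted labels.
    print(confusion_matrix(y_test, y_pred))

    # ROC AUC is computed from predicted probabilities of the positive class.
    print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # Per-class precision, recall, and F1 in one report.
    print(classification_report(y_test, y_pred))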

Interpret Predictions Of ML Models

After training an ML model, we generally evaluate its performance by calculating and visualizing various ML metrics (confusion matrix, ROC AUC curve, precision-recall curve, silhouette analysis, elbow method, etc.).

These metrics are normally a good starting point, but in many situations they don't give a complete picture of model performance. For example, a simple cat-vs-dog image classifier can end up using background pixels to classify images instead of the pixels of the actual object (cat or dog).

In such situations, the ML metrics will still report good results, so we should always be a little skeptical of model performance.

We can dig deeper and try to understand how our model performs on individual examples by interpreting its predictions. Various algorithms have been developed over time to interpret the predictions of ML models, and many Python libraries (lime, eli5, treeinterpreter, shap, etc.) provide implementations.
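
As a rough sketch of what this looks like in code, the example below uses shap's TreeExplainer to get per-feature contributions for individual predictions; the dataset and model are illustrative choices:

    # A minimal sketch of interpreting individual predictions with shap;
    # the dataset and model are illustrative choices.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer assigns each feature a contribution (SHAP value) that
    # explains how it pushed a single prediction away from the average
    # prediction (explainer.expected_value).
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:5])

    # One row per sample, one column per feature.
    print(shap_values.shape)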

Hyperparameter Tuning / Optimization

Machine learning models generally have many settings that need to be tuned to get the best-performing model. E.g., a decision tree has parameters like tree depth, min samples per leaf, maximum leaf nodes, the criterion used to evaluate splits, etc., and different values of these can be tried to find the best-performing decision tree model.

These tunable settings of ML models are generally referred to as hyperparameters. Over the years, various approaches have been developed to find the best-performing hyperparameters for an ML model. This search process is referred to as hyperparameter tuning or hyperparameter optimization.

Python has many libraries (optuna, hyperopt, scikit-optimize, scikit-learn, etc.) that let us perform hyperparameter tuning to find the best settings for our model.
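
As a minimal sketch, the grid search below tunes the decision tree parameters mentioned earlier using scikit-learn's GridSearchCV; the dataset and candidate values are illustrative choices:

    # A minimal sketch of hyperparameter tuning with scikit-learn's
    # GridSearchCV; the dataset and candidate values are illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Every combination of these candidate values is evaluated with
    # 5-fold cross-validation; the best-scoring one is kept.
    param_grid = {
        "max_depth": [3, 5, 10, None],
        "min_samples_leaf": [1, 5, 10],
        "criterion": ["gini", "entropy"],
    }
    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_)
    print("Best CV accuracy:", search.best_score_)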

Gradient Boosting

Gradient boosting is a machine learning algorithm based on an ensemble of estimators and is used for regression and classification tasks. The ensemble consists of a sequence of weak predictors / estimators whose predictions are combined to make the final model prediction; each new estimator is trained to correct the errors of the ensemble built so far.

The majority of the time, these weak predictors are decision trees, and the algorithm is then referred to as gradient-boosted trees or gradient-boosted decision trees. They are best suited for structured tabular datasets.

Python has many libraries (XGBoost, CatBoost, LightGBM, scikit-learn, etc.) that provide an implementation of gradient boosting. Apart from the implementation, these libraries provide many extra features like parallelization, GPU training, distributed training, command-line training / evaluation, higher accuracy, etc.
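
As a minimal sketch, here is gradient boosting with scikit-learn's built-in gradient-boosted trees; the dataset and settings are illustrative choices:

    # A minimal sketch of gradient-boosted decision trees using
    # scikit-learn; the dataset and settings are illustrative choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 100 shallow trees built sequentially; each new tree fits the errors
    # of the ensemble so far, and learning_rate scales its contribution.
    model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                       max_depth=3, random_state=0)
    model.fit(X_train, y_train)

    print("Test accuracy:", model.score(X_test, y_test))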