Overview of sections

  • HAX912X - Generalized Linear Model / Machine Learning

  • Introductory examples

  • Multiple linear regression

    • In R, contrasts are used to handle categorical variables (factors) in models like linear regression. Categorical variables need to be encoded numerically so they can be included in models, and contrasts define how the levels of a factor are represented as numeric values.

      When you fit a model in R with a categorical predictor (factor), R automatically creates dummy variables to represent the different levels of the factor. The type of contrast determines how these dummy variables are encoded and how comparisons between factor levels are made in the model.

      This R script demonstrates various ways of encoding categorical variables and fitting linear models using the lm() function. It also shows how to manipulate factor levels and contrasts when building regression models.
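
      As a rough illustration of the idea (the course script itself is in R), the Python sketch below builds treatment-style dummy variables for a small invented factor and fits a linear model, mirroring what R's default treatment contrasts do inside lm(). The column names, levels, and values are made up for the example.

      ```python
      # Minimal sketch (not the course script): treatment-style dummy coding of a
      # categorical predictor, analogous to R's default contr.treatment in lm().
      # The 'group' and 'y' columns and their values are invented for illustration.
      import pandas as pd
      from sklearn.linear_model import LinearRegression

      df = pd.DataFrame({
          "group": ["a", "a", "b", "b", "c", "c"],   # factor with three levels
          "y":     [1.0, 1.2, 2.1, 1.9, 3.0, 3.2],
      })

      # drop_first=True keeps level "a" as the reference, as treatment contrasts do:
      # each remaining dummy measures the difference from the reference level.
      X = pd.get_dummies(df[["group"]], drop_first=True)
      model = LinearRegression().fit(X, df["y"])

      print(X.columns.tolist())   # ['group_b', 'group_c']
      print(model.intercept_)     # mean of the reference level "a"
      print(model.coef_)          # estimated differences b - a and c - a
      ```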

  • Overparameterization in regression

    • The provided script is an R program that simulates a synthetic dataset and uses it to compare ordinary linear regression with regularization-based regression techniques. It covers generating the data, fitting several models, and evaluating their performance. Below is a detailed commentary on the operations performed by each major block of the script:

      1. Setting the Seed for Reproducibility:
         - set.seed(345678) ensures that the results of random processes (like data generation) are consistent every time the script runs.

      2. Data Generation:
         - The script generates a synthetic dataset with 50 observations (`n`). It includes three normally distributed predictor variables (x1, x2, x3), a noise variable (u) with a specific variance, and a response variable (y) determined by a linear combination of these predictors plus noise.
         - Additional features (the z variables) are drawn from a uniform distribution, adding 30 extra predictors and increasing the dimensionality of the dataset.

      3. Data Scaling and Test Set Preparation:
         - The predictors are scaled to have zero mean and unit variance. This standardization is crucial for many modeling techniques, especially when regularization is involved.
         - A test dataset of 1000 observations is generated similarly to the training set.

      4. Linear Regression Modeling:
         - A linear model (model1) is fitted using all predictors on the original data, and predictions are made on the test set. Root Mean Squared Error (RMSE) is computed to evaluate model performance.
         - Another linear model (model1.scale) is fitted on the scaled data, with performance also evaluated via RMSE.

      5. Reduced Linear Regression:
         - A simpler model (model2.scale) using only x1, x2, and x3 as predictors is fitted to the scaled data and evaluated, allowing comparison against more complex models to see if additional predictors genuinely improve model performance.

      6. Oracle Model Evaluation:
         - An oracle model computes its predictions from the true parameters of the data-generating process, serving as a baseline for the best performance any fitted model could achieve.

      7. Regularization Models:
         - Lasso and Ridge regression models are fitted using the glmnet package, with a range of lambda values to find the optimal regularization strength.
         - These models are plotted and tuned using cross-validation (the train function from the caret package), aiming to optimize the lambda parameter based on RMSE.

      8. Model Tuning and Validation:
         - The best performing lambda values for both Lasso and Ridge models are identified, and the models are re-fit using these optimal parameters.
         - Predictions are made on the scaled test set, and performance is again assessed via RMSE.

      9. Visualization and Analysis:
         - The script includes commands to plot the coefficients of the fitted models, helping visualize how regularization affects the impact of different predictors.

      10. Advanced Techniques:
          - The use of repeated cross-validation (with parameters such as number of folds and repeats) helps ensure that the model evaluation is robust and reliable.
          - Detailed results from model tuning are displayed, providing insights into how the model performance varies with changes in lambda.

      This script illustrates practical data science tasks, including data preparation, model fitting, and performance evaluation, particularly in settings where the number of predictors is large relative to the number of observations. It also showcases how regularization techniques help prevent overfitting and improve prediction accuracy on unseen data.
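
      The script described above is written in R (lm, glmnet, and the caret train function). The condensed Python sketch below mirrors the same workflow with scikit-learn standing in for glmnet/caret; the true coefficients, noise level, and alpha grid are placeholders rather than the values used in the course script.

      ```python
      # Condensed Python sketch of the overparameterization experiment.
      # scikit-learn replaces R's lm/glmnet/caret; parameter values are placeholders.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LinearRegression, LassoCV, RidgeCV
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(345678)            # reproducibility, like set.seed()

      def make_data(n, beta, sigma):
          X = rng.normal(size=(n, 3))                # informative predictors x1, x2, x3
          Z = rng.uniform(size=(n, 30))              # 30 irrelevant extra predictors
          y = X @ beta + rng.normal(scale=sigma, size=n)
          return np.hstack([X, Z]), y

      beta, sigma = np.array([2.0, -1.0, 0.5]), 1.0    # assumed "true" parameters
      X_train, y_train = make_data(50, beta, sigma)    # small training set
      X_test,  y_test  = make_data(1000, beta, sigma)  # large test set

      # Standardize predictors (scaler fitted on the training set only).
      scaler = StandardScaler().fit(X_train)
      Xs_train, Xs_test = scaler.transform(X_train), scaler.transform(X_test)

      def rmse(model, X, y):
          return np.sqrt(mean_squared_error(y, model.predict(X)))

      # Full linear model with all 33 predictors vs. reduced model with x1..x3 only.
      full = LinearRegression().fit(Xs_train, y_train)
      reduced = LinearRegression().fit(Xs_train[:, :3], y_train)

      # "Oracle" predictions use the true coefficients on the unscaled predictors.
      oracle_rmse = np.sqrt(mean_squared_error(y_test, X_test[:, :3] @ beta))

      # Lasso and ridge with the regularization strength chosen by cross-validation.
      lasso = LassoCV(cv=10).fit(Xs_train, y_train)
      ridge = RidgeCV(alphas=np.logspace(-3, 3, 50), cv=10).fit(Xs_train, y_train)

      print("full OLS  RMSE:", rmse(full, Xs_test, y_test))
      print("reduced   RMSE:", rmse(reduced, Xs_test[:, :3], y_test))
      print("oracle    RMSE:", oracle_rmse)
      print("lasso     RMSE:", rmse(lasso, Xs_test, y_test), "alpha =", lasso.alpha_)
      print("ridge     RMSE:", rmse(ridge, Xs_test, y_test), "alpha =", ridge.alpha_)
      ```

      With only 50 observations and 33 predictors, the full least-squares fit will typically overfit, while the reduced and penalized fits should stay closer to the oracle RMSE, which is the point the original R script makes.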

    • This Python script demonstrates several regression techniques for modeling the relationship between vehicle speed and braking distance, using a dataset loaded from a file named freinage.txt. Here's a breakdown of the script's key components and operations:

      1. Imports and Data Loading:
          - Several Python libraries are imported, including numpy for data manipulation, sklearn for machine learning models, and matplotlib for plotting.
          - The working directory is changed to a specified path where the data file is located, and the dataset is loaded into a numpy array.

      2. Data Preparation:

          - The data is split into features (X_data) and target (Y_data).
          - StandardScaler is used to standardize the speed feature, which is crucial for many machine learning algorithms to perform well.

      3. Linear Regression:

          - A simple linear regression model is fit to the standardized data.
          - The fitted model is used to predict the braking distance over a range of standardized speeds, and the result is plotted.

      4. Polynomial Regression:
          - Polynomial features of degrees 2 and 6 are generated from the speed data.
          - Linear regression models are fit to these polynomial features to capture more complex relationships between speed and braking distance.
          - Predictions from these models are plotted to visualize how they compare to the linear model.

      5. Ridge Regression:
          - Multiple ridge regression models with different regularization strengths (alpha) are fit using polynomial features of degree 6.
          - This regularization can help prevent overfitting by shrinking the coefficients.
          - The effectiveness of each alpha value is visualized through different plots.

      6. Cross-Validation and Model Selection:
          - Ridge regression models with polynomial features of degrees 1, 2, and 6 are evaluated using cross-validation to determine the best alpha value.
          - Lasso regression is also applied with a similar approach to ridge, using LassoCV to automatically find the best alpha.
          - Performance metrics (Mean Squared Error) and optimal alpha values are printed for each model.

      7. Visualization:
          - Multiple plots are created throughout the script to visualize the data alongside the predictions from various models. This helps in comparing how well each model captures the relationship between speed and braking distance.

      8. Advanced Model Evaluation:
          - The script includes advanced techniques like cross-validation scoring and grid search for hyperparameter tuning in Lasso regression.
          - The best parameters and model coefficients are printed, providing insights into the model's performance and configuration.

      This script demonstrates how to apply and compare linear and polynomial regression models, together with regularization techniques such as Ridge and Lasso, in order to model and predict the relationship between vehicle speed and braking distance.
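
      A condensed sketch of this workflow is given below. It assumes that freinage.txt contains two whitespace-separated columns (speed, then braking distance) and uses illustrative alpha grids; both are assumptions made for the sketch rather than details taken from the original script.

      ```python
      # Condensed sketch of the braking-distance workflow. The layout of
      # freinage.txt (speed, distance) and the alpha grids are assumptions.
      import numpy as np
      import matplotlib.pyplot as plt
      from sklearn.preprocessing import StandardScaler, PolynomialFeatures
      from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, LassoCV
      from sklearn.pipeline import make_pipeline

      data = np.loadtxt("freinage.txt")      # assumed columns: speed, braking distance
      X_data = data[:, [0]]                  # feature matrix (speed)
      Y_data = data[:, 1]                    # target (braking distance)

      grid = np.linspace(X_data.min(), X_data.max(), 200).reshape(-1, 1)
      plt.scatter(X_data, Y_data, color="black", label="data")

      # Linear fit and polynomial fits of degree 2 and 6 on the standardized speed.
      for degree in (1, 2, 6):
          model = make_pipeline(StandardScaler(),
                                PolynomialFeatures(degree, include_bias=False),
                                LinearRegression())
          model.fit(X_data, Y_data)
          plt.plot(grid.ravel(), model.predict(grid), label=f"degree {degree}")

      # Degree-6 ridge fits with increasing regularization strength.
      for alpha in (0.01, 1.0, 100.0):
          ridge = make_pipeline(StandardScaler(),
                                PolynomialFeatures(6, include_bias=False),
                                Ridge(alpha=alpha))
          ridge.fit(X_data, Y_data)
          plt.plot(grid.ravel(), ridge.predict(grid), "--", label=f"ridge alpha={alpha}")

      plt.xlabel("speed"); plt.ylabel("braking distance"); plt.legend(); plt.show()

      # Cross-validated choice of alpha for ridge and lasso on degree-6 features.
      ridge_cv = make_pipeline(StandardScaler(), PolynomialFeatures(6, include_bias=False),
                               RidgeCV(alphas=np.logspace(-3, 3, 50)))
      lasso_cv = make_pipeline(StandardScaler(), PolynomialFeatures(6, include_bias=False),
                               LassoCV(cv=5, max_iter=50_000))
      for name, m in [("ridge", ridge_cv), ("lasso", lasso_cv)]:
          m.fit(X_data, Y_data)
          print(name, "best alpha:", m[-1].alpha_)
      ```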

  • K-Fold Cross-Validation

  • Generalized linear models

  • Logistic regression

  • Random Forests

  • Introduction to neural networks

  • Credit Card Fraud Detection

    https://www.kaggle.com/mlg-ulb/creditcardfraud

  • PyTorch