
Machine Learning-Logistic Regression


Another post starts with you beautiful people!
I appreciate that you have shown interest in the Machine Learning track and enjoyed my previous post about Linear Regression, where we learned the concept with a case study of a bike-sharing system.
Today we will continue our Data Science journey and learn about Logistic Regression.
Like all regression analyses, logistic regression is a predictive analysis.
Linear regression, however, works on a continuum of numeric estimates. In order to classify correctly, we need a more suitable measure, such as the probability of class membership.
Thanks to the following formula, we can transform a linear regression's numeric estimate into a probability that better describes how well a class fits an observation:

probability of a class = exp(r) / (1+exp(r))

  • r is the regression result (the sum of the variables weighted by the coefficients).
  • exp is the exponential function.
  • exp(r) corresponds to Euler’s number e raised to the power of r.

A linear regression that uses such a formula (also called a link function) to transform its results into probabilities is a logistic regression.
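Here is a minimal sketch (not part of the original walkthrough) of how that link function turns a raw regression score r into a probability:

```python
import numpy as np

def logistic(r):
    """Transform a raw regression score r into a probability between 0 and 1."""
    return np.exp(r) / (1 + np.exp(r))

# A score of 0 maps to 0.5; large positive scores approach 1,
# large negative scores approach 0.
print(logistic(0))   # 0.5
print(logistic(2))   # ~0.88
print(logistic(-2))  # ~0.12
```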


Logistic regression is similar to linear regression, the only difference being the y data, which should contain integer values indicating the class of each observation.
Agenda of this exercise-

For this exercise we will work on a very interesting dataset- the Affairs dataset.
In this dataset, extramarital affair data is used to explain the allocation of an individual's time among work, time spent with a spouse, and time spent with a paramour. The data is used as an example of regression with censored data.
It was derived from a survey of women in 1974 by Redbook magazine, in which married women were asked about their participation in extramarital affairs.


Description of Variables-
The dataset contains 6366 observations of 9 variables:
  • rate_marriage: woman's rating of her marriage (1 = very poor, 5 = very good)
  • age: woman's age
  • yrs_married: number of years married
  • children: number of children
  • religious: woman's rating of how religious she is (1 = not religious, 4 = strongly religious)
  • educ: level of education (9 = grade school, 12 = high school, 14 = some college, 16 = college graduate, 17 = some graduate school, 20 = advanced degree)
  • occupation: woman's occupation (1 = student, 2 = farming/semi-skilled/unskilled, 3 = "white collar", 4 = teacher/nurse/writer/technician/skilled, 5 = managerial/business, 6 = professional with advanced degree)
  • occupation_husb: husband's occupation (same coding as above)
  • affairs: time spent in extra-marital affairs
Problem Statement:-
We will treat this as a classification problem by creating a new binary variable affair (did the woman have at least one affair?) and trying to predict the classification for each woman.

Importing the required modules-
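The original code is not embedded here, but these are the modules this walkthrough relies on (assuming a recent scikit-learn, where the split and cross-validation helpers live in sklearn.model_selection):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn import metrics
```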

Data Pre-Processing-
First, let's load the dataset and add a binary 'affair' column.
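A minimal sketch, assuming we use the affairs dataset bundled with statsmodels and name the DataFrame dta:

```python
# Load Fair's affairs dataset and add a binary target:
# 1 if the woman reported any affair, 0 otherwise.
dta = sm.datasets.fair.load_pandas().data
dta['affair'] = (dta.affairs > 0).astype(int)
print(dta.head())
```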


Data Exploration-
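One way to produce the comparison described below (a sketch, assuming the dta DataFrame from the previous step):

```python
# Average of every column, split by whether the woman had an affair.
print(dta.groupby('affair').mean())
```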

We can see that on average, women who have affairs rate their marriages lower, which is to be expected. 

Let's take another look at the rate_marriage variable.
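A similar group-by, this time keyed on the marriage rating:

```python
# Average of every column for each marriage rating (1 = very poor ... 5 = very good).
print(dta.groupby('rate_marriage').mean())
```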

An increase in age, yrs_married, and children appears to correlate with a declining marriage rating.

Data Visualization-

Let's take a look at the distribution of marriage ratings for those having affairs versus those not having affairs.

Let's use a stacked barplot to look at the percentage of women having affairs by number of years of marriage.
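A sketch of the plots described above, using pandas and matplotlib (the exact styling of the original figures may differ):

```python
# Histograms of education and marriage rating.
dta.educ.hist()
plt.title('Histogram of Education')
plt.xlabel('Education Level')
plt.ylabel('Frequency')
plt.show()

dta.rate_marriage.hist()
plt.title('Histogram of Marriage Rating')
plt.xlabel('Marriage Rating')
plt.ylabel('Frequency')
plt.show()

# Distribution of marriage ratings for women with and without affairs.
pd.crosstab(dta.rate_marriage, dta.affair.astype(bool)).plot(kind='bar')
plt.title('Marriage Rating Distribution by Affair Status')
plt.xlabel('Marriage Rating')
plt.ylabel('Frequency')
plt.show()

# Stacked bar plot: percentage of women having affairs by years of marriage.
affair_yrs_married = pd.crosstab(dta.yrs_married, dta.affair.astype(bool))
affair_yrs_married.div(affair_yrs_married.sum(axis=1), axis=0).plot(kind='bar', stacked=True)
plt.title('Affair Percentage by Years Married')
plt.xlabel('Years Married')
plt.ylabel('Percentage')
plt.show()
```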

Prepare Data for Logistic Regression-

To prepare the data, we will add an intercept column as well as dummy variables for occupation and occupation_husb, since we are treating them as categorical variables.
The dmatrices function from the patsy module can do that using formula language.

The column names for the dummy variables are ugly, so let's rename those-

We also need to flatten y into a 1-D array, so that scikit-learn will properly understand it as the response variable.
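A sketch of this preparation step, assuming the dummy columns patsy generates are named like C(occupation)[T.2.0]:

```python
# Build the design matrices with patsy; C() marks the occupation fields as categorical.
y, X = dmatrices("affair ~ rate_marriage + age + yrs_married + children + "
                 "religious + educ + C(occupation) + C(occupation_husb)",
                 dta, return_type='dataframe')

# Rename the auto-generated dummy columns to something readable.
X = X.rename(columns={'C(occupation)[T.2.0]': 'occ_2',
                      'C(occupation)[T.3.0]': 'occ_3',
                      'C(occupation)[T.4.0]': 'occ_4',
                      'C(occupation)[T.5.0]': 'occ_5',
                      'C(occupation)[T.6.0]': 'occ_6',
                      'C(occupation_husb)[T.2.0]': 'occ_husb_2',
                      'C(occupation_husb)[T.3.0]': 'occ_husb_3',
                      'C(occupation_husb)[T.4.0]': 'occ_husb_4',
                      'C(occupation_husb)[T.5.0]': 'occ_husb_5',
                      'C(occupation_husb)[T.6.0]': 'occ_husb_6'})

# Flatten y into a 1-D array so scikit-learn treats it as the response variable.
y = np.ravel(y)
```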

Logistic Regression-
Let's go ahead and run logistic regression on the entire data set, and see how accurate it is!
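A minimal version of that step with scikit-learn:

```python
# Fit a logistic regression on the full dataset and check its accuracy.
model = LogisticRegression()
model.fit(X, y)
print(model.score(X, y))   # roughly 0.73 in this walkthrough
```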

73% accuracy seems good, but what's the null error rate?
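The null error rate can be read off the class balance of y:

```python
# Share of women who had an affair; always predicting "no"
# would be correct (1 - this value) of the time.
print(y.mean())   # roughly 0.32
```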


Only 32% of the women had affairs, which means that we could obtain 68% accuracy by always predicting "no". 
So we're doing better than the null error rate, but not by much.
Let's examine the coefficients to see what we learn-
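One way to line up feature names with the fitted coefficients (a sketch, not the original output):

```python
# Pair every column of X with its fitted coefficient.
coef_table = pd.DataFrame({'feature': X.columns, 'coefficient': model.coef_[0]})
print(coef_table)
```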


From the above output we can say that increases in marriage rating and religiousness correspond to a decrease in the likelihood of having an affair.

For both the wife's occupation and the husband's occupation, the lowest likelihood of having an affair corresponds to the baseline occupation (student), since all of the dummy coefficients are positive.

Model Evaluation Using a Validation Set-
So far, we have trained and tested on the same set. Let's instead split the data into a training set and a testing set.
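A sketch of that split (the 70/30 split and the fixed random seed here are choices for illustration, not necessarily the original ones):

```python
# Hold out 30% of the data for evaluation and train a fresh model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
```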

We now need to predict class labels for the test set. We will also generate the class probabilities, just to take a look.
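For example, using the held-out split from above:

```python
# Predicted class labels and class probabilities for the test set.
predicted = model2.predict(X_test)
probs = model2.predict_proba(X_test)
print(predicted)
print(probs)   # column 0: P(no affair), column 1: P(affair)
```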
As you can see, the classifier is predicting a 1 (having an affair) any time the probability in the second column is greater than 0.5.
Now let's generate some evaluation metrics-
The accuracy is 73%, which is the same as we experienced when training and predicting on the same data.
We can also see the confusion matrix and a classification report with other metrics-
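A sketch of those metrics with scikit-learn:

```python
# Accuracy, ROC AUC, confusion matrix, and a full classification report.
print(metrics.accuracy_score(y_test, predicted))   # roughly 0.73
print(metrics.roc_auc_score(y_test, probs[:, 1]))
print(metrics.confusion_matrix(y_test, predicted))
print(metrics.classification_report(y_test, predicted))
```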

Model Evaluation Using Cross-Validation-
Now let's try 10-fold cross-validation, to see if the accuracy holds up more rigorously.
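For example:

```python
# 10-fold cross-validation of the same model on the full dataset.
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print(scores)
print(scores.mean())   # roughly 0.73
```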

Looks good. It's still performing at 73% accuracy. So our model is ready for prediction!

Can we predict the probability of an affair using our model?
Let's predict the probability of an affair for a random woman not present in the dataset. 
Assume she's a 25-year-old housewife who graduated college, has been married for 3 years, has 1 child, rates herself as strongly religious, rates her marriage as fair, and her husband is a farmer.
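One plausible encoding of that profile (the specific occupation codes used here are an assumption; the row is reordered to match the X matrix built with patsy above):

```python
# Hypothetical profile: occ_4 assumed for the wife's occupation,
# occ_husb_2 (farming) for the husband, rate_marriage = 3 ("fair"),
# age 25, 3 years married, 1 child, religious = 4, educ = 16 (college graduate).
profile = {'Intercept': 1,
           'occ_2': 0, 'occ_3': 0, 'occ_4': 1, 'occ_5': 0, 'occ_6': 0,
           'occ_husb_2': 1, 'occ_husb_3': 0, 'occ_husb_4': 0, 'occ_husb_5': 0, 'occ_husb_6': 0,
           'rate_marriage': 3, 'age': 25, 'yrs_married': 3, 'children': 1,
           'religious': 4, 'educ': 16}
woman = pd.DataFrame([profile])[X.columns]   # keep the same column order as X
print(model2.predict_proba(woman)[0][1])     # probability of an affair, roughly 0.23
```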
From our model we can predict that the probability of an affair is about 23%.

Looks cool, right? We can make many improvements like the ones below to improve our model-
  • including interaction terms
  • removing features
  • regularization techniques
  • using a non-linear model
It's time to try it yourself and improve the model.

In my next post I will share when and where to use Linear or Logistic Regression.

