
LightGBM and Kaggle's Mercari Price Suggestion Challenge


Another post starts with you beautiful people!
I hope you have enjoyed and learned something from the previous two posts about real-world machine learning problems on Kaggle.
As I said earlier, Kaggle is a great platform to apply your machine learning skills and enhance your knowledge; today I will again share my learning from there with all of you!

In this post we will work on an online machine learning competition where we need to predict the price of products for Japan's biggest community-powered shopping app. The main attraction of this challenge is that it is a Kernels-only competition; that means the datasets are available for download only in stage 1. In the final stage they will be available only in Kernels.

What kind of problem is this? Since our goal is to predict the price (which is a number), it will be a regression problem.

Data: You can see the datasets here

Exploring the datasets: The datasets are provided as zipped TSV (tab-separated values) files. So how can we read such data? Pandas has the answer to this!

#loading the dataset
Here I have used 'c' as the engine parameter value because the 'c' engine is faster than the python engine, and in a competition speed really matters.
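A minimal sketch of how this can be done (the file paths are assumptions; point them to wherever you extracted the TSVs):

import pandas as pd

# read the tab-separated files with the faster 'c' parser engine
train = pd.read_csv('train.tsv', sep='\t', engine='c')
test = pd.read_csv('test.tsv', sep='\t', engine='c')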

#peek of the training dataset
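A quick look at the shape and the first few rows (assuming the DataFrames loaded above):

# quick peek at the training data
print(train.shape)
print(train.head())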

The test dataset has all the columns of the training dataset except the target variable, 'price'.

#Checking for missing data
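One simple way to count the missing values per column with pandas:

# count missing values in each column of both datasets
print(train.isnull().sum())
print(test.isnull().sum())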



From the above it is quite clear that there is a lot of missing data in two columns of the datasets, which cannot be ignored and should be handled with care. Our next step will be to fill the missing values.

Approach to handle category name: First we will look at category_name. If you remember any e-commerce app, you will notice that the category name follows almost the same pattern in all of them. It is in the format Root Category/Category/Subcategory. The given dataset follows the same trend, so we need to split the category and save each part in a separate column. For this splitting a lambda function is quite useful, and I used one to apply my logic. Here is a snippet-

#splitting of category_name

Here I have not named any of the new columns 'category' because pandas already uses that identifier (for its category dtype), and reusing the name can cause issues.
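A sketch of the splitting logic (the new column names general_cat, subcat_1 and subcat_2 are my own choice, not fixed by the dataset):

# split 'Root Category/Category/Subcategory' into three separate columns
def split_cat(text):
    try:
        return text.split('/')[0], text.split('/')[1], text.split('/')[2]
    except (AttributeError, IndexError):
        # missing or malformed category names get a placeholder
        return 'missing', 'missing', 'missing'

for df in (train, test):
    df['general_cat'], df['subcat_1'], df['subcat_2'] = \
        zip(*df['category_name'].apply(lambda x: split_cat(x)))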

#filling missing values in categories
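A minimal way to do the filling; I also fill item_description here since it has a handful of missing entries, and the 'missing' token itself is just a convention I am assuming:

# replace missing values with a constant placeholder string
for df in (train, test):
    df['category_name'] = df['category_name'].fillna('missing')
    df['brand_name'] = df['brand_name'].fillna('missing')
    df['item_description'] = df['item_description'].fillna('missing')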

Since most machine learning models do not accept categorical variables, we need to convert them to numeric values or to the pandas category data type.
#converting categorical variables into pandas category data type
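For example (the exact set of columns to convert is a judgement call):

# convert the text/label columns to the memory-friendly 'category' dtype
for df in (train, test):
    for col in ('general_cat', 'subcat_1', 'subcat_2', 'brand_name', 'item_condition_id'):
        df[col] = df[col].astype('category')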
In this problem our target variable is 'price', and when I analyzed this column I found that some products have a zero price, but they are not a great number. So I decided to remove zero-priced products from the training dataset.
#remove zero priced products
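Something along these lines removes them:

# keep only products with a positive price
train = train[train['price'] > 0].reset_index(drop=True)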

#combine the datasets and separate the target variables
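A sketch of this step; taking the log of the price is my assumption here, motivated by the competition's RMSLE metric, and is not strictly part of the combining:

import numpy as np

# target variable (log-transformed price); remember where train ends
y = np.log1p(train['price'])
nrow_train = train.shape[0]

# stack train and test so that all text features are encoded consistently
merged = pd.concat([train.drop('price', axis=1), test], ignore_index=True)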

Next, one of the most important steps is to handle the text in the name, category, brand name and description of the products. To deal with this we have a powerful package: sklearn. Using this package, we first work on the name and category columns and convert them to a matrix of token counts, which gives us a sparse representation of the counts-
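(A sketch with CountVectorizer; the min_df value is an assumption, not a tuned setting.)

from sklearn.feature_extraction.text import CountVectorizer

# bag-of-words counts for the product name and the raw category string
cv = CountVectorizer(min_df=10)
X_name = cv.fit_transform(merged['name'])

cv = CountVectorizer()
X_category = cv.fit_transform(merged['category_name'])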

For handling the description column we will convert it to a matrix of TF-IDF features, which is equivalent to using CountVectorizer followed by TfidfTransformer-
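(For example; max_features, the n-gram range and the stop-word list below are assumptions, not tuned values.)

from sklearn.feature_extraction.text import TfidfVectorizer

# TF-IDF features from the item descriptions
tv = TfidfVectorizer(max_features=50000, ngram_range=(1, 3), stop_words='english')
X_description = tv.fit_transform(merged['item_description'])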

To deal with the brand name we will convert the multi-class labels to binary labels in a one-vs-all fashion-
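(This is what sklearn's LabelBinarizer does; a minimal sketch.)

from sklearn.preprocessing import LabelBinarizer

# one column per brand, kept sparse to save memory
lb = LabelBinarizer(sparse_output=True)
X_brand = lb.fit_transform(merged['brand_name'])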

Next, we will convert the categorical variables item_condition_id and shipping into dummy/indicator variables and then merge them. For efficient merging we will use a Compressed Sparse Row (CSR) matrix-
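(A sketch of this step; I pass columns= explicitly so both variables are dummy-encoded regardless of their dtype.)

from scipy.sparse import csr_matrix

# dummy variables for item condition and shipping, stored as a sparse CSR matrix
X_dummies = csr_matrix(
    pd.get_dummies(merged[['item_condition_id', 'shipping']],
                   columns=['item_condition_id', 'shipping']).values
)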

Finally we have the cleaned variables; next we will stack the sparse arrays in sequence horizontally (column-wise) into a single Compressed Sparse Row matrix-
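(A sketch of stacking everything together and splitting the combined matrix back into train and test parts; the variable names carry over from the snippets above.)

from scipy.sparse import hstack

# stack all sparse feature blocks column-wise into one CSR matrix
sparse_merged = hstack((X_dummies, X_description, X_brand, X_category, X_name)).tocsr()

X = sparse_merged[:nrow_train]
X_test = sparse_merged[nrow_train:]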

Now we have cleaned data and we are ready for modeling.
For fitting our model I have used sklearn.linear_model.Ridge-
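(A minimal baseline sketch; alpha and the solver below are assumptions, not tuned values.)

from sklearn.linear_model import Ridge

# simple linear baseline on the sparse features
ridge = Ridge(alpha=0.5, solver='sag', fit_intercept=True, random_state=42)
ridge.fit(X, y)
ridge_preds = ridge.predict(X_test)   # predictions are on the log-price scale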

Next, we will split the training dataset so that we don't overfit our model-
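(For example, holding out 10% for validation; the split ratio and seed are assumptions.)

from sklearn.model_selection import train_test_split

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.1, random_state=42)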

The most important part of the modeling is the training, and for this I have chosen a fast, distributed, high-performance gradient boosting (GBDT, GBRT, GBM or MART) framework: LightGBM.
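A hedged sketch of the LightGBM training; every parameter value below is an assumption for illustration, not the exact configuration behind my submission:

import lightgbm as lgb

params = {
    'objective': 'regression',
    'metric': 'rmse',
    'learning_rate': 0.5,
    'num_leaves': 31,
    'verbosity': -1,
}

# LightGBM's own Dataset format wrapping the sparse matrices
d_train = lgb.Dataset(X_train, label=y_train)
d_valid = lgb.Dataset(X_valid, label=y_valid)

model = lgb.train(params, d_train, num_boost_round=3000, valid_sets=[d_valid])

# predict on the test features and undo the log1p transform applied to the target
preds = np.expm1(model.predict(X_test))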

For more details of this framework please read the official LightGBM documentation.

With the above approach I submitted my result on Kaggle and found myself in the top 16%-

So what I have learned from various competitions is that obtaining a very good score and ranking depends on two things: first, the EDA of the data, and second, the machine learning model with fine parameter tuning.

For parameter tuning I found a very good article here- lightgbm parameter tuning

If you are interested in the whole code, you can find it here: submission-to-mercari-price-suggestion-challenge.
I suggest you download the code, analyze the data further, do some parameter tuning and improve the score.

Meanwhile Friends! Go chase your dreams, have an awesome day, make every second count and see you later in my next post.
