
Python Advanced - Visualizing the Titanic Disaster

Another post starts with you beautiful people!
Today we will work on the famous Titanic dataset, taken from Kaggle.
This dataset gives details of the passengers aboard the Titanic, along with a column indicating whether each passenger survived. Those who survived are represented as "1", while those who did not survive are represented as "0".

The columns in the dataset are as below:
PassengerId: Passenger identity
Survived: Whether the passenger survived (1) or not (0)
Pclass: Class of ticket
Name: Name of the passenger
Sex: Sex of the passenger (male or female)
Age: Age of the passenger
SibSp: Number of siblings and/or spouses travelling with the passenger
Parch: Number of parents and/or children travelling with the passenger
Ticket: Ticket number
Fare: Price of the ticket
Cabin: Cabin number

Let's start with some hands-on:
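A minimal sketch of the setup, assuming you have downloaded train.csv from Kaggle's Titanic competition into your working directory (the file name and path are assumptions):

import pandas as pd

# Load the Titanic training data (adjust the path to where you saved it)
titanic = pd.read_csv('train.csv')

# Peek at the first few rows
titanic.head()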


Let's generate descriptive statistics:
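pandas does this in one call; a sketch, reusing the titanic DataFrame loaded above:

# Summary statistics (count, mean, std, min, quartiles, max) of the numeric columns
titanic.describe()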






Result:





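Now a first look with seaborn; a minimal sketch (plotting the Sex column here is an assumption, since the original screenshot is not recoverable):

import seaborn as sns
import matplotlib.pyplot as plt

# Bar chart of passenger counts by sex
sns.countplot(x='Sex', data=titanic)
plt.show()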
Note: if you see the error ImportError: No module named 'seaborn', it means you need to install the seaborn library by running the command pip install seaborn at the command prompt.


Result:

Let's find the children in the dataset:
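A common approach is to treat anyone under 16 as a child; the cutoff and the helper below are assumptions for illustration:

# Classify each passenger as male, female, or child
def male_female_child(passenger):
    age, sex = passenger
    # Age can be NaN; NaN < 16 is False, so such rows keep their sex label
    return 'child' if age < 16 else sex

titanic['Person'] = titanic[['Age', 'Sex']].apply(male_female_child, axis=1)
titanic.head()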


Let's count each person category individually:
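value_counts gives the tally; a sketch using the Person column created above:

# Count how many males, females and children are aboard
titanic['Person'].value_counts()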


Now let's plot male, female, and child counts across Pclass:
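A sketch with seaborn's countplot, splitting each ticket class by the Person column:

# Passenger counts per class, split into male/female/child
sns.countplot(x='Pclass', hue='Person', data=titanic)
plt.show()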

Result:





People Who Survived and Who Didn't:
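A count plot of the Survived column shows the split; a sketch (0 = did not survive, 1 = survived):

# Overall survival counts
sns.countplot(x='Survived', data=titanic)
plt.show()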




How many males and females survived:
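Adding Sex as the hue answers this; a sketch:

# Survival counts split by sex
sns.countplot(x='Survived', hue='Sex', data=titanic)
plt.show()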
                                          
Result: more females survived than males.

Let's compute the pairwise correlation of columns, excluding NA/null values:
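pandas' DataFrame.corr() does exactly this, skipping NA/null values pairwise; a sketch that first drops the non-numeric columns and then draws the matrix as a heatmap:

# Pairwise correlation of the numeric columns (NA values are excluded pairwise)
corr = titanic.select_dtypes(include='number').corr()

# Visualize the correlation matrix
sns.heatmap(corr, annot=True, cmap='coolwarm')
plt.show()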




Result:

See, with the help of the above visualizations, how easily you can transform a dataset into a story.
Try it in your notebook and share your thoughts in the comments.
