Naive Bayes Classifier (Step by Step)

I’ve created these step-by-step machine learning algorithm implementations in Python for everyone who is new to the field and might be confused by the different steps.

Naive Bayes is a very old statistical model with solid mathematical foundations.

It was developed by a church minister who was intrigued by God, probability, and the effects of chance in life.

You can read about the curious history of Thomas Bayes’ discovery of Bayes’ Theorem, which gave way to probabilistic statistics.

Despite its limitations and simplicity, Naive Bayes has its advantages. It can predict data incredibly fast and often surprisingly accurately, while producing probabilistic outputs that are sometimes necessary.

In this tutorial you can find out a little more about the major steps of the simplest and most straightforward Naive Bayes implementation, which can be your stepping stone toward more sophisticated machine learning applications.

I’ve split the Naive Bayes Classifier implementation into 2 different categories here: the preparation phase and the actual machine learning work.

Down the page I’ve also grouped similar steps with each other so they are easier to follow.

  1. Import the relevant Python libraries
  2. Import the data
  3. Read / clean / adjust the data (if needed)
  4. Create a train / test split
  5. Create the Naive Bayes model object
  6. Fit the model
  7. Predict
  8. Evaluate the accuracy
Let’s read more about each individual step and what each of them achieves:

1 Import Libraries

pandas is useful for constructing data frames, and scikit-learn is the ultimate library for simple machine learning operations and for learning and practicing machine learning.

2 Import the Data

We need a nice dataset that makes sense to analyze with machine learning techniques, in this case the Gaussian Naive Bayes Classifier. Scikit-learn ships with some cool sample datasets as well.

3 Read the Data

Reading data is simple, but there can be important points to handle: dealing with columns, headers, and titles, constructing data frames, etc.

4 Split the Data

Even splitting data is made easy with scikit-learn; for this operation we will use the train_test_split function from the sklearn.model_selection module.

5 Create the Model

Machine learning models can be created with a very simple and straightforward process using scikit-learn. In this case we will create an object from the GaussianNB class of the sklearn.naive_bayes module.

6 Fit the Model

Machine learning models are generally fit with training data. This is the part where the training of the model takes place, and we will do the same for our Naive Bayes model.

7 Predict

Once the model is ready, predictions can be made on the test part of the data. Furthermore, I enjoy predicting foreign values that are not in the initial dataset just to observe the outcomes the model produces. The .predict method is used for predictions.

8 Evaluation

Finally, the metrics module of the scikit-learn library is very useful for testing the accuracy of the model’s predictions. This part could be done manually as well, but the metrics module brings lots of functionality and simplicity to the table.

1- Importing the libraries (pandas and sklearn)

First, the import part for the libraries:

  • pandas is imported for data frames
  • train_test_split from sklearn.model_selection makes splitting data for train and test purposes very easy and proper
  • sklearn.naive_bayes provides various Naive Bayes Classifier models
  • the datasets module of sklearn has great sample datasets, making it easy to experiment with AI & machine learning
  • metrics is great for evaluating the results we will get.
###Importing Libraries
import pandas as pd
from sklearn import datasets
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split as tts

2- Importing the data (iris dataset)

It’s time to find some data to work with. For simplicity, I suggest using the datasets module pre-included in scikit-learn. Its sample datasets are great for practice and everything is already taken care of, so there won’t be complications such as missing values or invalid characters while you’re learning.

Let’s import the iris dataset, it’s simple and readily available:

###Importing Dataset
iris = datasets.load_iris()
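
If you’re curious what load_iris actually returns, it’s a Bunch object whose attributes you can inspect directly. A quick optional peek, using only attributes documented in scikit-learn:

###Optional: inspecting the dataset
print(iris.feature_names)  # names of the four measurement columns
print(iris.target_names)   # the three species labels
print(iris.data.shape)     # (150, 4): 150 samples, 4 features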

3- Reading the data (scikit-learn datasets and pandas DataFrame)

Now we can get the data ready:

The pandas DataFrame class is used to construct a data frame. Data frames are very useful when working with large datasets that have different column titles.

In the case of machine learning algorithms, you usually have feature(s) and one or more outcomes to work with; this means different titles and sometimes different types of data. That’s why DataFrame becomes the perfect structure to work with.

###Constructing Data Frame
# Column abbreviations: sl/sw = sepal length/width, pl/pw = petal length/width
data = pd.DataFrame({"sl":iris.data[:,0], "sw":iris.data[:,1], "pl":iris.data[:,2], "pw":iris.data[:,3], 'species': iris.target})
# print(data["species"])
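
If you want to sanity-check the frame you just built, a couple of standard pandas calls will do. This is optional and purely for inspection:

###Optional: peeking at the data frame
print(data.head())      # first 5 rows
print(data.describe())  # per-column summary statistics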

4- Splitting the data (train_test_split function)

Here is another standard Machine Learning step.

We need to split the data so that there are:

  • training feature(s) and outcome(s)
  • test feature(s) and test outcome(s)
The logic is to train the Naive Bayes machine learning model with the train split and then test the trained model with the test split.

It’s a rather simple step thanks to scikit-learn’s train_test_split function.

  • I named the variables X_tr, y_tr for the training data and X_ts, y_ts for the test data. This is up to your taste or your circumstances.
  • X_tr and X_ts will be assigned parts of the features
  • y_tr and y_ts will be assigned parts of the outcomes
  • The split ratio can be set using the test_size parameter. This is an important parameter and something you should experiment with to get a better understanding. 1/3 or 30% are usually reasonable ratios.
  • The model then trains on X_tr and y_tr.
  • Then we test it on X_ts and y_ts to see how successful the model is.
###Splitting train/test data
X = data[['sl','sw','pl','pw']]
y = data["species"]
X_tr, X_ts, y_tr, y_ts = tts(X, y, test_size=30/100, random_state=None)

An advantage of Naive Bayes is that its strong independence assumption makes it less prone to overfitting than more flexible models, though evaluating on held-out data is still essential.
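
That said, if you’d like to check how the model generalizes beyond a single split, cross-validation is a quick sanity check. Here is a minimal optional sketch using scikit-learn’s cross_val_score; it is not part of the original eight steps:

###Optional: 5-fold cross-validation sanity check
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(GaussianNB(), X, y, cv=5)  # accuracy over 5 train/test rotations
print("Mean CV accuracy:", cv_scores.mean())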

5- Creating the model (naive_bayes.GaussianNB)

Now, we can create a Naive Bayes Classifier object and put machine learning to work using the training data:

There are also a number of other Naive Bayes models (e.g. MultinomialNB) that can be valuable in different situations.
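
For instance, MultinomialNB is designed for non-negative, count-like features (word counts in text classification are the classic use case). A minimal sketch of swapping it in, purely as an illustration since our tutorial sticks with the Gaussian variant:

###Optional: trying a different Naive Bayes variant
from sklearn.naive_bayes import MultinomialNB
MNB = MultinomialNB()  # suited to non-negative, count-like features
MNB.fit(X_tr, y_tr)    # same fit/predict interface as GaussianNB
print(MNB.score(X_ts, y_ts))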

Also, if there are going to be any optimizations, this is the right moment to pass hyperparameters to the model during initialization.

###Creating Naive Bayes Classifier Model
# var_smoothing adds a fraction of the largest feature variance to all variances for numerical stability (default is 1e-9)
GNB = GaussianNB(var_smoothing=2e-9)
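
If you’d rather search for a var_smoothing value than hand-pick one, scikit-learn’s GridSearchCV can try candidates for you. A minimal sketch with an assumed, arbitrary grid of values:

###Optional: searching var_smoothing with GridSearchCV
from sklearn.model_selection import GridSearchCV
param_grid = {"var_smoothing": [1e-10, 1e-9, 1e-8, 1e-7]}  # assumed candidate values, not tuned
search = GridSearchCV(GaussianNB(), param_grid, cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_)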

6- Fitting the model (training with features (X) and outcomes (y))

The model can be trained using the .fit method on the model object we’ve just created.

###Training the Model
GNB.fit(X_tr, y_tr)
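
Fitting is also when GaussianNB estimates its internal statistics. In recent scikit-learn versions you can inspect the per-class feature means and variances it learned; this is optional and just for understanding what .fit produced:

###Optional: inspecting the fitted parameters
print(GNB.classes_)  # class labels seen during fit
print(GNB.theta_)    # per-class mean of each feature
print(GNB.var_)      # per-class variance of each feature (scikit-learn >= 1.0)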

7- Making predictions (.predict method)

Now it’s time to make predictions with the trained model.

###Making Predictions
y_pr = GNB.predict(X_ts)
print(y_pr)

8- Evaluating results (scikit-learn metrics module)

Evaluating the model is quite important for validation. If the results aren’t promising, you might need to tweak the settings or find a more suitable model. The metrics module of the sklearn library is very useful in this sense.

###Evaluating Prediction Accuracy
print("Acc %:",metrics.accuracy_score(y_ts, y_pr)*100)

Bonus: Predicting foreign data

###Making Prediction with Foreign Data
# Values are in the order sl, sw, pl, pw; these made-up values fall outside the iris ranges on purpose
print(GNB.predict([[1,1,0.5,6]]))
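
The prediction comes back as a numeric class label. If you’d rather see the species name, you can map it through iris.target_names; a small optional addition:

###Optional: mapping the numeric prediction to a species name
pred = GNB.predict([[1,1,0.5,6]])
print(iris.target_names[pred])  # prints the predicted species name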

You can see the full code in one piece on this page: Naive Bayes Simple Implementation