Naive Bayes Algorithm in Machine Learning: A Quick Overview

Priya Roy Chowdhury
2 min read · Jul 27, 2021
Photo by Artem Sapegin on Unsplash

The Naïve Bayes algorithm is one of the most popular and simplest classification algorithms in machine learning. It is used when we work with conditional probability, that is, when we have to predict the probability of one event given that another event has already occurred. It is based on Bayes' theorem.

Basically, it combines Bayes' theorem with the naïve assumption that the features are independent of each other.

The formula for Bayes' theorem is:

P(A|B) = P(B|A) × P(A) / P(B)

Where:

P(A|B) is the conditional probability of A, given that B has already occurred

P(B|A) is the conditional probability of B, given that A has already occurred

P(A) and P(B) are the prior probabilities of A and B, respectively
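
As a quick, made-up illustration of the formula (the numbers below are assumptions, not real data), suppose A is "an email is spam" and B is "the email contains the word 'offer'":

```python
# Worked example of Bayes' theorem with made-up numbers
# A = "email is spam", B = "email contains the word 'offer'"

p_a = 0.2               # P(A): prior probability that an email is spam
p_b_given_a = 0.5       # P(B|A): probability a spam email contains 'offer'
p_b_given_not_a = 0.05  # probability a non-spam email contains 'offer'

# P(B) via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(spam | 'offer') = {p_a_given_b:.3f}")  # ~0.714
```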

Some of the use cases where Naïve Bayes classification is used are:

- Spam detection

- Handwritten character recognition

- Face detection

- Weather prediction

- News article categorization

There are many ways to build a Naïve Bayes model, and in Python we use the scikit-learn library for this. Three types of Naïve Bayes classifiers are available in scikit-learn:

Gaussian Naïve Bayes: Used in classification problems where the features are continuous and assumed to be normally distributed.
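
Below is a minimal sketch of Gaussian Naïve Bayes with scikit-learn, using the built-in Iris dataset as an example of continuous features (the dataset choice and split settings are just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Iris has continuous features, a reasonable fit for GaussianNB
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = GaussianNB()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
```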

Multinomial Naïve Bayes: This classifier is suitable for classification with discrete features (e.g., word counts for text classification). It is mostly used in NLP problems.
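
A minimal sketch of Multinomial Naïve Bayes on a tiny made-up corpus, where CountVectorizer turns each sentence into word counts (the sentences and labels are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus: 1 = spam, 0 = not spam
texts = [
    "free offer click now",
    "limited offer win money",
    "meeting agenda for monday",
    "project status update",
]
labels = [1, 1, 0, 0]

# Word counts are the discrete features MultinomialNB expects
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

model = MultinomialNB()
model.fit(X, labels)

new_text = vectorizer.transform(["free money offer"])
print(model.predict(new_text))  # expected: [1]
```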

Bernoulli Naïve Bayes: Used for discrete data where every feature takes only binary (0/1) values.
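
A minimal sketch of Bernoulli Naïve Bayes on made-up binary features, for example the presence (1) or absence (0) of a few keywords in a document (the data is invented for illustration):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row is a document; each column is 1 if a keyword is present, else 0
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [0, 1, 0],
])
y = np.array([1, 1, 0, 0])  # 1 = spam, 0 = not spam

model = BernoulliNB()
model.fit(X, y)
print(model.predict(np.array([[1, 1, 1]])))  # likely [1]
```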

So, when you need a quick first look at a model for hundreds of thousands of data points in very little time, you can use Naïve Bayes. It is extremely fast compared to other classification algorithms.

Priya Roy Chowdhury

IT professional and NLP/machine learning enthusiast. I try to learn new things and want to share them to help others.