This tutorial demonstrates the implementation of Logistic Regression using the scikit-learn library.

###Importing Libraries
import pandas as pd
from sklearn import datasets
from sklearn.linear_model import LogisticRegression as logreg
from sklearn.model_selection import train_test_split as tts
from sklearn import metrics

###Importing Dataset
iris = datasets.load_iris()
data = pd.DataFrame({"sl":iris.data[:,0], "sw":iris.data[:,1], "pl":iris.data[:,2], "pw":iris.data[:,3], 'species': iris.target})
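
If you're new to the iris data, a quick optional check of the assembled DataFrame can help; the column names sl, sw, pl and pw are just the short-hands defined above for sepal length/width and petal length/width.

# Optional: inspect the first rows and the class distribution
print(data.head())
print(data['species'].value_counts())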

###Splitting train/test data
X=data[['sl','sw','pl','pw']]
y=data["species"]
# Hold out 30% of the samples for testing; pass an integer random_state for a reproducible split
X_tr, X_ts, y_tr, y_ts = tts(X, y, test_size=0.3, random_state=None)
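
As an optional sanity check, you can confirm that roughly 70% of the 150 rows went to the training set and 30% to the test set.

# Optional: verify the sizes of the resulting splits
print(X_tr.shape, X_ts.shape, y_tr.shape, y_ts.shape)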

###Creating Logistic Regression Classifier Model
LOGR = logreg()
# help(LOGR)
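
LogisticRegression() is used here with its default settings. Depending on your scikit-learn version, the default solver can report a ConvergenceWarning on some datasets; a common adjustment is to raise max_iter, as in the commented line below (200 is just an illustrative value).

# Optional: allow more solver iterations if a ConvergenceWarning appears
# LOGR = logreg(max_iter=200)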

###Training the Model
LOGR.fit(X_tr,y_tr)
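
After fitting, the learned parameters are stored on the estimator itself; printing them is optional but can help connect the trained model back to the logistic regression equation.

# Optional: inspect the fitted coefficients and intercepts (one row/value per class for the three iris species)
print(LOGR.coef_)
print(LOGR.intercept_)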

###Making Predictions
y_pr=LOGR.predict(X_ts)
print(y_pr)

###Evaluating Prediction Accuracy
print("Acc %:",metrics.accuracy_score(y_ts, y_pr)*100)

###Making a Prediction with Foreign Data
print(LOGR.predict([[1,1,0.5,6]]))
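
Note that the values are passed in the same order as the training columns (sl, sw, pl, pw). Because the model was fitted on a pandas DataFrame, recent scikit-learn versions may warn that a plain list has no feature names; wrapping the sample in a DataFrame with the same columns, as sketched below, avoids that warning.

# Optional: predict from a DataFrame with explicit column names to avoid the feature-name warning
sample = pd.DataFrame([[1, 1, 0.5, 6]], columns=['sl', 'sw', 'pl', 'pw'])
print(LOGR.predict(sample))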

This tutorial aims to provide a simple, clear and reusable Logistic Regression implementation, so that seasoned visitors can take a quick look at it, understand what’s going on, and copy or reproduce it for their own use cases without losing too much time.

On the other hand, it gives visitors with less experience in Machine Learning and Python an opportunity to experiment with the algorithm.

If you’d like to see a step-by-step explanation of this algorithm, where everything is covered in detail, you can check out this tutorial.

You can also read about its history, learn about the optimization parameters and find more examples on the main Logistic Regression page here.