Clustering: iris data with Python

Using sklearn

1 Introduction

This document demonstrates how to perform clustering in Python using the scikit-learn library. Clustering is an unsupervised learning technique that groups similar data points together based on their inherent characteristics. We will use the iris dataset for this demonstration.

2 Load Data

First, we load the necessary libraries and the iris dataset.

Code
import pandas as pd
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns

# Load the iris dataset
iris = load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)

# Standardize the features so each contributes equally to distance calculations
scaler = StandardScaler()
scaled_iris = scaler.fit_transform(iris_df)

3 Elbow Method

The Elbow Method is a heuristic for choosing the number of clusters. We plot the total within-cluster sum of squares (WCSS) against the number of clusters and look for the "elbow" where adding more clusters stops producing large reductions. In the plot below, the bend at k = 3 suggests three clusters is a good choice.

Code
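# Compute the within-cluster sum of squares (inertia) for k = 1 to 10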
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)
    kmeans.fit(scaled_iris)
    wcss.append(kmeans.inertia_)

plt.figure(figsize=(10, 6))
plt.plot(range(1, 11), wcss)
plt.title('Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.xticks(range(1, 11))
plt.show()

4 K-Means Clustering

K-Means is a popular partitioning algorithm that assigns each point to the nearest of k cluster centroids. Guided by the elbow plot above, we group the iris data into 3 clusters.

Code
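# Fit K-Means with k = 3 (chosen via the elbow plot) and record each point's cluster label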
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
iris_df['kmeans_cluster'] = kmeans.fit_predict(scaled_iris)

# Visualize the clusters
plt.figure(figsize=(10, 6))
sns.scatterplot(x=scaled_iris[:, 0], y=scaled_iris[:, 1], hue=iris_df['kmeans_cluster'], palette='viridis', s=100)
plt.title('K-Means Clustering of Iris Data')
plt.xlabel(iris.feature_names[0] + ' (standardized)')
plt.ylabel(iris.feature_names[1] + ' (standardized)')
plt.show()
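
As a complementary check on cluster quality, we can compute the mean silhouette score for these labels (values near 1 indicate well-separated clusters; note that the elbow and silhouette heuristics do not always agree on the best k). A minimal sketch using sklearn.metrics:

Code
from sklearn.metrics import silhouette_score

# Mean silhouette coefficient of the k = 3 K-Means labels on the standardized data
score = silhouette_score(scaled_iris, iris_df['kmeans_cluster'])
print(f"Silhouette score for k = 3: {score:.3f}")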

5 Hierarchical Clustering

Hierarchical clustering is another common method: the agglomerative variant starts with each point as its own cluster and repeatedly merges the closest pair. The merge history can be visualized as a dendrogram, which also helps in choosing the number of clusters.

Code
import scipy.cluster.hierarchy as shc

plt.figure(figsize=(10, 7))
plt.title("Iris Dendrogram")
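# Ward linkage merges, at each step, the pair of clusters that least increases total within-cluster variance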
dend = shc.dendrogram(shc.linkage(scaled_iris, method='ward'))
plt.show()

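# Agglomerative clustering with 3 clusters; sklearn's default linkage is 'ward', matching the dendrogram above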
hierarchical = AgglomerativeClustering(n_clusters=3)
iris_df['hierarchical_cluster'] = hierarchical.fit_predict(scaled_iris)

# Visualize the clusters
plt.figure(figsize=(10, 6))
sns.scatterplot(x=scaled_iris[:, 0], y=scaled_iris[:, 1], hue=iris_df['hierarchical_cluster'], palette='viridis', s=100)
plt.title('Hierarchical Clustering of Iris Data')
plt.xlabel(iris.feature_names[0] + ' (standardized)')
plt.ylabel(iris.feature_names[1] + ' (standardized)')
plt.show()

6 Comparison with Real Groups

We can compare the clustering results with the actual species of the iris flowers.

Code
# K-Means Comparison
print("K-Means Clustering vs. Real Species")
print(pd.crosstab(iris_df['kmeans_cluster'], iris.target_names[iris.target]))
K-Means Clustering vs. Real Species
col_0           setosa  versicolor  virginica
kmeans_cluster                               
0                    0          39         14
1                   50           0          0
2                    0          11         36
Code
# Hierarchical Clustering Comparison
print("Hierarchical Clustering vs. Real Species")
print(pd.crosstab(iris_df['hierarchical_cluster'], iris.target_names[iris.target]))
Hierarchical Clustering vs. Real Species
col_0                 setosa  versicolor  virginica
hierarchical_cluster                               
0                          0          23         48
1                         49           0          0
2                          1          27          2
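
The agreement shown in the crosstabs can also be summarized in a single number with the adjusted Rand index, which measures how well two labelings agree after correcting for chance (1.0 means perfect agreement, values near 0 mean no better than random). A minimal sketch using sklearn.metrics:

Code
from sklearn.metrics import adjusted_rand_score

# Agreement between cluster labels and the true species, corrected for chance
print("K-Means ARI:     ", adjusted_rand_score(iris.target, iris_df['kmeans_cluster']))
print("Hierarchical ARI:", adjusted_rand_score(iris.target, iris_df['hierarchical_cluster']))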

7 Comparison of K-Means and Hierarchical Clustering

Here’s a comparison of K-Means and Hierarchical Clustering:

Feature                 | K-Means Clustering                                                     | Hierarchical Clustering
Approach                | Partitioning (divides data into k clusters)                           | Agglomerative (bottom-up) or divisive (top-down)
Number of clusters      | Requires pre-specification (k)                                         | Does not require pre-specification; dendrogram helps
Computational cost      | Faster for large datasets                                              | Slower for large datasets (O(n^2) to O(n^3))
Cluster shape           | Tends to form spherical clusters                                       | Can discover non-spherical clusters (depending on linkage)
Sensitivity to outliers | Sensitive to outliers                                                  | Less sensitive to outliers
Interpretability        | Easy to interpret                                                      | Dendrogram can be complex for large datasets
Reproducibility         | Can vary with initial centroids unless seeded (see the sketch below)  | Reproducible
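
To illustrate the reproducibility row: K-Means depends on its random centroid initialization, so labels can vary across runs unless the seed is fixed. A minimal sketch, assuming the scaled_iris array from above:

Code
import numpy as np

# With a fixed random_state, repeated K-Means runs produce identical labels
labels_a = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(scaled_iris)
labels_b = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(scaled_iris)
print(np.array_equal(labels_a, labels_b))  # should print True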

8 Conclusion

This document provided a brief overview of clustering in Python with scikit-learn. We demonstrated K-Means and hierarchical clustering on the iris dataset, compared both sets of cluster labels with the true species, and summarized the practical trade-offs between the two methods.