1 Introduction
This document demonstrates how to perform clustering in Python using the scikit-learn library. Clustering is an unsupervised learning technique that groups similar data points together based on their inherent characteristics. We will use the iris dataset for this demonstration.
2 Load Data
First, we load the necessary libraries and the iris dataset.
Code
```python
import pandas as pd
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns

# Load the iris dataset
iris = load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)

# Standardize the data
scaler = StandardScaler()
scaled_iris = scaler.fit_transform(iris_df)
```
3 Elbow Method
The Elbow Method is a heuristic for choosing the number of clusters: we plot the total within-cluster sum of squares (WCSS) against the number of clusters and look for the "elbow" where adding more clusters stops paying off. For the iris data, the elbow suggests 3 clusters.
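Code

```python
# Fit K-Means for k = 1..10 and record the WCSS (inertia) for each k
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300,
                    n_init=10, random_state=0)
    kmeans.fit(scaled_iris)
    wcss.append(kmeans.inertia_)

plt.figure(figsize=(10, 6))
plt.plot(range(1, 11), wcss)
plt.title('Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.xticks(range(1, 11))
plt.show()
```

4 K-Means Clustering
K-Means is a popular partitioning algorithm. We use it to group the iris data into 3 clusters.

Code

```python
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
iris_df['kmeans_cluster'] = kmeans.fit_predict(scaled_iris)

# Visualize the clusters on the first two (standardized) features
plt.figure(figsize=(10, 6))
sns.scatterplot(x=scaled_iris[:, 0], y=scaled_iris[:, 1],
                hue=iris_df['kmeans_cluster'], palette='viridis', s=100)
plt.title('K-Means Clustering of Iris Data')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```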
5 Hierarchical Clustering
Hierarchical clustering is another common clustering method. It builds a tree of nested clusters, which we can visualize as a dendrogram before cutting it into a flat clustering.
Code
```python
import scipy.cluster.hierarchy as shc

# Dendrogram from Ward linkage on the standardized data
plt.figure(figsize=(10, 7))
plt.title("Iris Dendrogram")
dend = shc.dendrogram(shc.linkage(scaled_iris, method='ward'))
plt.show()

# Agglomerative clustering with 3 clusters (ward linkage is the default)
hierarchical = AgglomerativeClustering(n_clusters=3)
iris_df['hierarchical_cluster'] = hierarchical.fit_predict(scaled_iris)

# Visualize the clusters
plt.figure(figsize=(10, 6))
sns.scatterplot(x=scaled_iris[:, 0], y=scaled_iris[:, 1],
                hue=iris_df['hierarchical_cluster'], palette='viridis', s=100)
plt.title('Hierarchical Clustering of Iris Data')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```
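As a side note, the dendrogram's linkage matrix can also be cut into flat clusters directly, without going back through scikit-learn. A minimal sketch using scipy's fcluster (assuming scaled_iris and pd from the cells above; this step is not part of the original analysis):

```python
from scipy.cluster.hierarchy import fcluster, linkage

# Cut the Ward linkage tree into 3 flat clusters; labels run 1..3
Z = linkage(scaled_iris, method='ward')
scipy_labels = fcluster(Z, t=3, criterion='maxclust')
print(pd.Series(scipy_labels).value_counts())
```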
6 Comparison with Real Groups
We can compare the clustering results with the actual species of the iris flowers.
Code
```python
# K-Means Comparison
print("K-Means Clustering vs. Real Species")
print(pd.crosstab(iris_df['kmeans_cluster'], iris.target_names[iris.target]))
```
```
K-Means Clustering vs. Real Species
col_0           setosa  versicolor  virginica
kmeans_cluster
0                    0          39         14
1                   50           0          0
2                    0          11         36
```
Code
```python
# Hierarchical Clustering Comparison
print("Hierarchical Clustering vs. Real Species")
print(pd.crosstab(iris_df['hierarchical_cluster'], iris.target_names[iris.target]))
```
```
Hierarchical Clustering vs. Real Species
col_0                 setosa  versicolor  virginica
hierarchical_cluster
0                          0          23         48
1                         49           0          0
2                          1          27          2
```
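Both crosstabs tell the same story: setosa is isolated almost perfectly, while versicolor and virginica overlap. To summarize the agreement with the true species in a single number, one option (not used in the original analysis) is the adjusted Rand index; a minimal sketch, assuming the cluster columns created above:

```python
from sklearn.metrics import adjusted_rand_score

# 1.0 means perfect agreement with the species labels, ~0.0 means chance level
print("K-Means ARI:     ", adjusted_rand_score(iris.target, iris_df['kmeans_cluster']))
print("Hierarchical ARI:", adjusted_rand_score(iris.target, iris_df['hierarchical_cluster']))
```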
7 Comparison of K-Means and Hierarchical Clustering
Here’s a comparison of K-Means and Hierarchical Clustering:
| Feature | K-Means Clustering | Hierarchical Clustering |
|:--------|:-------------------|:------------------------|
| **Approach** | Partitioning (divides data into k clusters) | Agglomerative (bottom-up) or divisive (top-down) |
| **Number of clusters** | Must be specified in advance (k) | Not required in advance; the dendrogram helps choose a cut |
| **Computational cost** | Faster for large datasets | Slower for large datasets (typically O(n²) to O(n³)) |
| **Cluster shape** | Tends to form spherical clusters | Can capture non-spherical clusters, depending on the linkage |
| **Sensitivity to outliers** | Sensitive to outliers | Less sensitive to outliers |
| **Interpretability** | Easy to interpret | Dendrogram can be complex for large datasets |
| **Reproducibility** | Can vary with initial centroids (unless the seed is fixed) | Deterministic |
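The reproducibility row is easy to verify directly. A minimal sketch (assuming scaled_iris and the imports from the earlier cells): fixing random_state makes K-Means repeatable, while AgglomerativeClustering has no randomness to fix.

```python
import numpy as np

# Two K-Means runs with the same seed produce identical labelings
km1 = KMeans(n_clusters=3, random_state=42, n_init=10).fit(scaled_iris)
km2 = KMeans(n_clusters=3, random_state=42, n_init=10).fit(scaled_iris)
print(np.array_equal(km1.labels_, km2.labels_))  # True

# Agglomerative clustering is deterministic: it takes no random_state at all
hc1 = AgglomerativeClustering(n_clusters=3).fit(scaled_iris)
hc2 = AgglomerativeClustering(n_clusters=3).fit(scaled_iris)
print(np.array_equal(hc1.labels_, hc2.labels_))  # True
```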
8 Conclusion
This document provided a brief overview of clustering in Python using scikit-learn. We demonstrated both K-Means and Hierarchical clustering on the iris dataset.