Clustering, a type of unsupervised learning, is one of the main categories of machine learning. It has the potential to enable human-like cognition in A.I., but its results are not necessarily intuitive to humans, nor similar to the way humans tend to group objects and create new categories.

Arguably this is because, while clustering and the discovery of ‘natural categories’ is a basic human cognitive activity, machine learning algorithms create clusters in very different ways than humans do, and the results do not necessarily behave the way human-created clusters do. The problem is compounded by the fact that multi-dimensional clusters are very difficult to visualize, so we often have to more or less ‘trust’ the algorithm’s results (a common issue across machine learning techniques).
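To make the visualization difficulty concrete: any picture of a multi-dimensional clustering is necessarily a projection that throws information away. Here is a minimal sketch, using synthetic data rather than the fruit dataset, that projects five-dimensional points down to two dimensions with PCA so the cluster labels can be plotted at all; whatever structure the clusters have in the dropped dimensions simply never appears on screen.

```python
# A minimal sketch, using synthetic data rather than the fruit dataset,
# of why cluster visualization involves a leap of faith: a 2-D plot of
# 5-D data is only a projection, and structure in the dropped
# dimensions never shows up in the picture.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two blobs in five dimensions, standing in for two clusters.
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
labels = np.array([0] * 20 + [1] * 20)

# Project onto the two directions of greatest variance.
coords = PCA(n_components=2).fit_transform(X)
plt.scatter(coords[:, 0], coords[:, 1], c=labels)
plt.title("5-D clusters projected to 2-D with PCA")
plt.show()
```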

Helping people build better intuitions about how machine-created clusters look and behave could improve the situation. With this in mind, I’ve created a very simple practice dataset – the fruit dataset – to illustrate clustering results in a way that lets people more easily compare their human-centric clustering expectations to the results produced by different machine learning algorithms.

This small dataset is created from a number of images of apples and pears. The collection of images is intended to provoke some questions about what counts as a good clustering result, what strategies would lead to such a result, and how context dependent and generalizable such results are (or should be).

You can download the fruit dataset here, and the metadata for the dataset here.

Each image has a small number of other measures associated with it. Categorical, ordinal, binary, integer and continuous variables allow for the application and comparison of different distance metrics, clustering algorithms and clustering quality metrics. Some of these variables are intended to be more superficial while others are intended to be more closely connected to what we would consider the ‘natural kinds’ present in the dataset.
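As a rough sketch of how those mixed variable types might be handled in practice, one common approach is to one-hot encode the categorical variables and standardize the numeric ones before handing everything to a distance-based clusterer. The column names and values below are hypothetical stand-ins, not the actual fruit metadata fields.

```python
# A minimal sketch of clustering mixed-type data. The column names and
# values are hypothetical stand-ins, not the actual fruit metadata.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

df = pd.DataFrame({
    "colour":   ["red", "green", "green", "yellow"],  # categorical
    "has_stem": [1, 0, 1, 1],                         # binary
    "ripeness": [2, 1, 3, 2],                         # ordinal (1 = unripe)
    "seeds":    [5, 8, 7, 6],                         # integer
    "weight_g": [150.0, 180.5, 170.2, 160.3],         # continuous
})

# One-hot encode the categorical column; standardize the rest so no
# single variable dominates the Euclidean distances.
colour = pd.get_dummies(df["colour"]).to_numpy(dtype=float)
numeric = StandardScaler().fit_transform(
    df[["has_stem", "ripeness", "seeds", "weight_g"]]
)
X = np.hstack([colour, numeric])

# Ward-linkage agglomerative clustering into two groups; with real data
# these might, or might not, line up with 'apples' and 'pears'.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)
```

How the categorical and numeric variables are weighted against each other is itself a design choice, and different choices can produce quite different clusters – which is exactly the kind of comparison the dataset is meant to support.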

Most importantly, any results can be compared with an ‘eyeball analysis’ of the resulting clusters: by looking at the images grouped in each cluster, we can see how the results compare to our human-centric perspective on what a good clustering result would look like. In doing this comparison, we can form our own opinions about how successful, or useful, different clustering strategies are.
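One lightweight way to set up that eyeball analysis is simply to group the image filenames by their cluster label and then inspect each group side by side. A minimal sketch follows; the filenames and labels are invented placeholders.

```python
# A minimal sketch of preparing an 'eyeball analysis': group image
# filenames by cluster label so each cluster can be inspected visually.
# The filenames and labels are hypothetical placeholders.
from collections import defaultdict

filenames = ["apple_01.jpg", "pear_01.jpg", "apple_02.jpg", "pear_02.jpg"]
labels = [0, 1, 0, 1]  # e.g. the output of fit_predict above

clusters = defaultdict(list)
for fname, label in zip(filenames, labels):
    clusters[label].append(fname)

for label, members in sorted(clusters.items()):
    print(f"cluster {label}: {members}")
```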

Post Author: Jen Schellinck

Jen Schellinck is the principal of Sysabee and an adjunct professor at Carleton's Institute of Cognitive Science. She founded Sysabee in 2012 with the goal of taking analysis techniques from machine learning and systems modeling and making them available to organizations seeking to gain the benefits of technology-supported analysis and decision making. For each project, she draws from a pool of expert consultants to create a team customized to the project's specific needs. She is also the founding member of the Data Science Experts Group, an association of data professionals who build flexible, customized solutions for data-driven companies and organizations. She remains an active participant in academic research via Carleton’s Cognitive Modeling Lab.