Title: | "Finding Groups in Data": Cluster Analysis Extended Rousseeuw et al. |
---|---|
Description: | Methods for Cluster analysis. Much extended the original from Peter Rousseeuw, Anja Struyf and Mia Hubert, based on Kaufman and Rousseeuw (1990) "Finding Groups in Data". |
Authors: | Martin Maechler [aut, cre] , Peter Rousseeuw [aut] (Fortran original, <https://orcid.org/0000-0002-3807-5353>), Anja Struyf [aut] (S original), Mia Hubert [aut] (S original, <https://orcid.org/0000-0001-6398-4850>), Kurt Hornik [trl, ctb] (port to R; maintenance(1999-2000), <https://orcid.org/0000-0003-4198-9911>), Matthias Studer [ctb], Pierre Roudier [ctb], Juan Gonzalez [ctb], Kamil Kozlowski [ctb], Erich Schubert [ctb] (fastpam options for pam(), <https://orcid.org/0000-0001-9143-4880>), Keefe Murphy [ctb] (volume.ellipsoid({d >= 3})) |
Maintainer: | Martin Maechler <[email protected]> |
License: | GPL (>= 2) |
Version: | 2.1.6 |
Built: | 2024-10-31 22:08:50 UTC |
Source: | https://github.com/cran/cluster |
Computes agglomerative hierarchical clustering of the dataset.
agnes(x, diss = inherits(x, "dist"), metric = "euclidean", stand = FALSE, method = "average", par.method, keep.diss = n < 100, keep.data = !diss, trace.lev = 0)
x |
data matrix or data frame, or dissimilarity matrix, depending on the
value of the In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed. In case of a dissimilarity matrix, |
diss |
logical flag: if TRUE (default for dist or dissimilarity objects), then x is assumed to be a dissimilarity matrix. If FALSE, then x is treated as a matrix of observations by variables. |
metric |
character string specifying the metric to be used for calculating
dissimilarities between observations.
The currently available options are "euclidean" and "manhattan". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. If x is already a dissimilarity matrix, then this argument will be ignored. |
stand |
logical flag: if TRUE, then the measurements in |
method |
character string defining the clustering method. The methods
implemented are "average" (group average, aka 'UPGMA'), "single" (single linkage), "complete" (complete linkage), "ward" (Ward's method), "weighted" (weighted average linkage, aka 'WPGMA'), its generalization "flexible" which uses (a constant version of) the Lance-Williams formula and the par.method argument, and "gaverage", a generalized "average" aka "flexible UPGMA" method also using the Lance-Williams formula and par.method.
The default is "average". |
par.method |
If |
keep.diss , keep.data
|
logicals indicating if the dissimilarities
and/or input data |
trace.lev |
integer specifying a trace level for printing
diagnostics during the algorithm. Default |
agnes is fully described in chapter 5 of Kaufman and Rousseeuw (1990). Compared to other agglomerative clustering methods such as hclust, agnes has the following features: (a) it yields the agglomerative coefficient (see agnes.object) which measures the amount of clustering structure found; and (b) apart from the usual tree it also provides the banner, a novel graphical display (see plot.agnes).
The agnes-algorithm constructs a hierarchy of clusterings. At first, each observation is a small cluster by itself. Clusters are merged until only one large cluster remains which contains all the observations. At each stage the two nearest clusters are combined to form one larger cluster.
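A minimal sketch of this hierarchy on a made-up one-dimensional toy matrix (the merge, height, and ac components are documented under agnes.object below):

## five points on a line; watch the hierarchy being built
x  <- cbind(c(1, 2, 4, 8, 9))
ag <- agnes(x)   # default method = "average"
ag$merge         # negative entries denote single observations
ag$height        # dissimilarity at each successive merge
ag$ac            # agglomerative coefficient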
For method="average", the distance between two clusters is the average of the dissimilarities between the points in one cluster and the points in the other cluster.
In method="single", we use the smallest dissimilarity between a point in the first cluster and a point in the second cluster (nearest neighbor method).
When method="complete", we use the largest dissimilarity between a point in the first cluster and a point in the second cluster (furthest neighbor method).
The method = "flexible" allows (and requires) more details: The Lance-Williams formula specifies how dissimilarities are computed when clusters are agglomerated (equation (32) in K&R(1990), p.237). If clusters C_1 and C_2 are agglomerated into a new cluster, the dissimilarity between their union and another cluster Q is given by

    d(C_1 ∪ C_2, Q) = alpha_1 * d(C_1, Q) + alpha_2 * d(C_2, Q) + beta * d(C_1, C_2) + gamma * |d(C_1, Q) - d(C_2, Q)|,

where the four coefficients (alpha_1, alpha_2, beta, gamma) are specified by the vector par.method, either directly as a vector of length 4, or (more conveniently) if par.method is of length 1, say alpha, par.method is extended to give the "Flexible Strategy" (K&R(1990), p.236 f) with Lance-Williams coefficients (alpha_1 = alpha_2 = alpha, beta = 1 - 2*alpha, gamma = 0). Also, if length(par.method) == 3, gamma = 0 is set.

Care and expertise is probably needed when using method = "flexible", particularly when par.method is specified with length longer than one. Since cluster version 2.0, choices leading to invalid merge structures now signal an error (from the C code already).
The weighted average (method="weighted") is the same as method="flexible", par.method = 0.5. Further, method="single" is equivalent to method="flexible", par.method = c(.5,.5,0,-.5), and method="complete" is equivalent to method="flexible", par.method = c(.5,.5,0,+.5).
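As an illustration only (this is not the package's internal code), the Lance-Williams update can be written as a small R function; lw_update and its argument names are made up for this sketch:

## Sketch of the Lance-Williams dissimilarity update
lw_update <- function(dC1Q, dC2Q, dC1C2, par = c(0.5, 0.5, 0, 0)) {
  a1 <- par[1]; a2 <- par[2]; b <- par[3]; g <- par[4]
  a1*dC1Q + a2*dC2Q + b*dC1C2 + g*abs(dC1Q - dC2Q)
}
## par = c(.5,.5,0,-.5) reproduces single linkage (the smaller of the two distances):
lw_update(3, 5, 2, par = c(.5, .5, 0, -.5))  # 3
## par = c(.5,.5,0,+.5) reproduces complete linkage (the larger of the two distances):
lw_update(3, 5, 2, par = c(.5, .5, 0, +.5))  # 5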
The method = "gaverage" is a generalization of "average", aka "flexible UPGMA" method, and is (a generalization of the approach) detailed in Belbin et al. (1992). As "flexible", it uses the Lance-Williams formula above for dissimilarity updating, but with alpha_1 and alpha_2 not constant, but proportional to the sizes n_1 and n_2 of the clusters C_1 and C_2 respectively, i.e.,

    alpha_j = alpha'_j * n_j / (n_1 + n_2),   j = 1, 2,

where alpha'_1, alpha'_2 are determined from par.method, either directly as (alpha_1, alpha_2, beta, gamma) or as (alpha_1, alpha_2, beta) with gamma = 0, or (less flexibly, but more conveniently) as follows:

Belbin et al proposed "flexible beta", i.e. the user would only specify beta (as par.method), sensibly in -1 <= beta < 1, and beta determines alpha'_1 and alpha'_2 as alpha'_j = 1 - beta, and gamma as gamma = 0.

This may be specified by par.method (as a length-1 vector), and if par.method is not specified, a default value of -0.1 is used, as Belbin et al recommend taking a value around -0.1 as a general agglomerative hierarchical clustering strategy.

Note that method = "gaverage", par.method = 0 (or par.method = c(1,1,0,0)) is equivalent to the agnes() default method "average".
an object of class "agnes" (which extends "twins") representing the clustering. See agnes.object for details, and methods applicable.
Cluster analysis divides a dataset into groups (clusters) of observations that are similar to each other. Hierarchical methods like agnes, diana, and mona construct a hierarchy of clusterings, with the number of clusters ranging from one to the number of observations. Partitioning methods like pam, clara, and fanny require that the number of clusters be given by the user.
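A minimal sketch contrasting the two families on the agriculture data set shipped with the package:

data(agriculture)
ah <- agnes(agriculture)         # hierarchical: no number of clusters needed up front
cutree(as.hclust(ah), k = 2)     # clusters are obtained afterwards by cutting the tree
ap <- pam(agriculture, k = 2)    # partitioning: k must be given by the user
ap$clustering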
Method "gaverage"
has been contributed by Pierre Roudier, Landcare
Research, New Zealand.
Kaufman, L. and Rousseeuw, P.J. (1990). (=: “K&R(1990)”) Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
Anja Struyf, Mia Hubert and Peter J. Rousseeuw (1996) Clustering in an Object-Oriented Environment. Journal of Statistical Software 1. doi:10.18637/jss.v001.i04
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997). Integrating Robust Clustering Techniques in S-PLUS, Computational Statistics and Data Analysis, 26, 17–37.
Lance, G.N., and W.T. Williams (1966). A General Theory of Classificatory Sorting Strategies, I. Hierarchical Systems. Computer J. 9, 373–380.
Belbin, L., Faith, D.P. and Milligan, G.W. (1992). A Comparison of Two Approaches to Beta-Flexible Clustering. Multivariate Behavioral Research, 27, 417–433.
agnes.object
, daisy
, diana
,
dist
, hclust
, plot.agnes
,
twins.object
.
data(votes.repub) agn1 <- agnes(votes.repub, metric = "manhattan", stand = TRUE) agn1 plot(agn1) op <- par(mfrow=c(2,2)) agn2 <- agnes(daisy(votes.repub), diss = TRUE, method = "complete") plot(agn2) ## alpha = 0.625 ==> beta = -1/4 is "recommended" by some agnS <- agnes(votes.repub, method = "flexible", par.meth = 0.625) plot(agnS) par(op) ## "show" equivalence of three "flexible" special cases d.vr <- daisy(votes.repub) a.wgt <- agnes(d.vr, method = "weighted") a.sing <- agnes(d.vr, method = "single") a.comp <- agnes(d.vr, method = "complete") iC <- -(6:7) # not using 'call' and 'method' for comparisons stopifnot( all.equal(a.wgt [iC], agnes(d.vr, method="flexible", par.method = 0.5)[iC]) , all.equal(a.sing[iC], agnes(d.vr, method="flex", par.method= c(.5,.5,0, -.5))[iC]), all.equal(a.comp[iC], agnes(d.vr, method="flex", par.method= c(.5,.5,0, +.5))[iC])) ## Exploring the dendrogram structure (d2 <- as.dendrogram(agn2)) # two main branches d2[[1]] # the first branch d2[[2]] # the 2nd one { 8 + 42 = 50 } d2[[1]][[1]]# first sub-branch of branch 1 .. and shorter form identical(d2[[c(1,1)]], d2[[1]][[1]]) ## a "textual picture" of the dendrogram : str(d2) data(agriculture) ## Plot similar to Figure 7 in ref ## Not run: plot(agnes(agriculture), ask = TRUE) data(animals) aa.a <- agnes(animals) # default method = "average" aa.ga <- agnes(animals, method = "gaverage") op <- par(mfcol=1:2, mgp=c(1.5, 0.6, 0), mar=c(.1+ c(4,3,2,1)), cex.main=0.8) plot(aa.a, which.plot = 2) plot(aa.ga, which.plot = 2) par(op) ## Show how "gaverage" is a "generalized average": aa.ga.0 <- agnes(animals, method = "gaverage", par.method = 0) stopifnot(all.equal(aa.ga.0[iC], aa.a[iC]))
The objects of class "agnes"
represent an agglomerative hierarchical clustering of a dataset.
A legitimate agnes
object is a list with the following components:
order |
a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
order.lab |
a vector similar to |
height |
a vector with the distances between merging clusters at the successive stages. |
ac |
the agglomerative coefficient, measuring the clustering structure of the dataset. For each observation i, denote by m(i) its dissimilarity to the first cluster it is merged with, divided by the dissimilarity of the merger in the final step of the algorithm. The ac is the average of all 1 - m(i). It can also be seen as the average width (or the percentage filled) of the banner plot. Because ac grows with the number of observations, this measure should not be used to compare datasets of very different sizes. |
merge |
an (n-1) by 2 matrix, where n is the number of observations. Row i
of |
diss |
an object of class |
data |
a matrix containing the original or standardized measurements, depending
on the |
This class of objects is returned from agnes
.
The "agnes"
class has methods for the following generic functions:
print
, summary
, plot
, and
as.dendrogram
.
In addition, cutree(x, *)
can be used to “cut”
the dendrogram in order to produce cluster assignments.
The class "agnes"
inherits from "twins"
.
Therefore, the generic functions pltree
and
as.hclust
are available for agnes
objects.
After applying as.hclust()
, all its methods are
available, of course.
agnes
, diana
,
as.hclust
, hclust
,
plot.agnes
, twins.object
.
data(agriculture)
ag.ag <- agnes(agriculture)
class(ag.ag)
pltree(ag.ag) # the dendrogram
## cut the dendrogram -> get cluster assignments:
(ck3 <- cutree(ag.ag, k = 3))
(ch6 <- cutree(as.hclust(ag.ag), h = 6))
stopifnot(identical(unname(ch6), ck3))
Gross National Product (GNP) per capita and percentage of the population working in agriculture for each country belonging to the European Union in 1993.
data(agriculture)
A data frame with 12 observations on 2 variables:
[ , 1] | x |
numeric | per capita GNP |
[ , 2] | y |
numeric | percentage in agriculture |
The row names of the data frame indicate the countries.
The data seem to show two clusters, the “more agricultural” one consisting of Greece, Portugal, Spain, and Ireland.
Eurostat (European Statistical Agency, 1994): Cijfers en feiten: Een statistisch portret van de Europese Unie.
see those in agnes
.
data(agriculture)
## Compute the dissimilarities using Euclidean metric and without
## standardization
daisy(agriculture, metric = "euclidean", stand = FALSE)
## 2nd plot is similar to Figure 3 in Struyf et al (1996)
plot(pam(agriculture, 2))
## Plot similar to Figure 7 in Struyf et al (1996)
## Not run: plot(agnes(agriculture), ask = TRUE)
## Plot similar to Figure 8 in Struyf et al (1996)
## Not run: plot(diana(agriculture), ask = TRUE)
This data set considers 6 binary attributes for 20 animals.
data(animals)
A data frame with 20 observations on 6 variables:
[ , 1] | war | warm-blooded |
[ , 2] | fly | can fly |
[ , 3] | ver | vertebrate |
[ , 4] | end | endangered |
[ , 5] | gro | live in groups |
[ , 6] | hai | have hair |
All variables are encoded as 1 = 'no', 2 = 'yes'.
This dataset is useful for illustrating monothetic (only a single variable is used for each split) hierarchical clustering.
Leonard Kaufman and Peter J. Rousseeuw (1990): Finding Groups in Data (pp 297ff). New York: Wiley.
see Struyf, Hubert & Rousseeuw (1996), in agnes
.
data(animals)
apply(animals, 2, table) # simple overview
ma <- mona(animals)
ma
## Plot similar to Figure 10 in Struyf et al (1996)
plot(ma)
Draws a “banner”, i.e. basically a horizontal barplot
visualizing the (agglomerative or divisive) hierarchical clustering or
another binary dendrogram structure.
bannerplot(x, w = rev(x$height), fromLeft = TRUE, main=NULL, sub=NULL, xlab = "Height", adj = 0, col = c(2, 0), border = 0, axes = TRUE, frame.plot = axes, rev.xax = !fromLeft, xax.pretty = TRUE, labels = NULL, nmax.lab = 35, max.strlen = 5, yax.do = axes && length(x$order) <= nmax.lab, yaxRight = fromLeft, y.mar = 2.4 + max.strlen/2.5, ...)
x |
a list with components |
w |
non-negative numeric vector of bar widths. |
fromLeft |
logical, indicating if the banner is from the left or not. |
main , sub
|
main and sub titles, see |
xlab |
x axis label (with ‘correct’ default e.g. for
|
adj |
passed to |
col |
vector of length 2, for two horizontal segments. |
border |
color for bar border; now defaults to background (no border). |
axes |
logical indicating if axes (and labels) should be drawn at all. |
frame.plot |
logical indicating the banner should be framed;
mainly used when |
rev.xax |
logical indicating if the x axis should be reversed (as
in |
xax.pretty |
logical or integer indicating if
|
labels |
labels to use on y-axis; the default is constructed from
|
nmax.lab |
integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
max.strlen |
positive integer giving the length to which strings are truncated in banner plot labeling. |
yax.do |
logical indicating if a y axis and banner labels should be drawn. |
yaxRight |
logical indicating if the y axis is on the right or left. |
y.mar |
positive number specifying the margin width to use when banners are labeled (along a y-axis). The default adapts to the string width and optimally would also depend on the font. |
... |
graphical parameters (see |
This is mainly a utility called from plot.agnes
,
plot.diana
and plot.mona
.
Martin Maechler (from original code of Kaufman and Rousseeuw).
data(agriculture)
bannerplot(agnes(agriculture), main = "Bannerplot")
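For a divisive clustering the same utility can be used; a hedged sketch (roughly what plot.diana does internally):

data(agriculture)
dv <- diana(agriculture)
bannerplot(dv, fromLeft = FALSE, main = "Banner of diana(agriculture)")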
This is a small rounded subset of the C-horizon data
chorizon
from package mvoutlier.
data(chorSub)
A data frame with 61 observations on 10 variables. The variables contain scaled concentrations of chemical elements.
This data set was produced from chorizon
via these statements:
data(chorizon, package = "mvoutlier")
chorSub <- round(100*scale(chorizon[,101:110]))[190:250,]
storage.mode(chorSub) <- "integer"
colnames(chorSub) <- gsub("_.*", '', colnames(chorSub))
Kola Project (1993-1998)
chorizon
in package mvoutlier and other
Kola data in the same package.
data(chorSub)
summary(chorSub)
pairs(chorSub, gap = .1) # some outliers
Computes a "clara"
object, a list
representing a
clustering of the data into k
clusters.
clara(x, k, metric = c("euclidean", "manhattan", "jaccard"), stand = FALSE, cluster.only = FALSE, samples = 5, sampsize = min(n, 40 + 2 * k), trace = 0, medoids.x = TRUE, keep.data = medoids.x, rngR = FALSE, pamLike = FALSE, correct.d = TRUE)
x |
data matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric (or logical). Missing values (NAs) are allowed. |
k |
integer, the number of clusters.
It is required that |
metric |
character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are "euclidean", "manhattan", "jaccard". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. |
stand |
logical, indicating if the measurements in |
cluster.only |
logical; if true, only the clustering will be computed and returned, see details. |
samples |
integer, say |
sampsize |
integer, say |
trace |
integer indicating a trace level for diagnostic output during the algorithm. |
medoids.x |
logical indicating if the medoids should be
returned, identically to some rows of the input data |
keep.data |
logical indicating if the (scaled if
|
rngR |
logical indicating if R's random number generator should
be used instead of the primitive clara()-builtin one. If true, this
also means that each call to |
pamLike |
logical indicating if the “swap” phase (see
|
correct.d |
logical or integer indicating that—only in the case
of Because the new correct formula is not back compatible, for the time
being, a warning is signalled in this case, unless the user explicitly
specifies |
clara
(for "euclidean" and "manhattan") is fully described in
chapter 3 of Kaufman and Rousseeuw (1990).
Compared to other partitioning methods such as pam
, it can deal with
much larger datasets. Internally, this is achieved by considering
sub-datasets of fixed size (sampsize
) such that the time and
storage requirements become linear in n rather than quadratic.
Each sub-dataset is partitioned into k
clusters using the same
algorithm as in pam
.
Once k
representative objects have been selected from the
sub-dataset, each observation of the entire dataset is assigned
to the nearest medoid.
The mean (equivalent to the sum) of the dissimilarities of the observations to their closest medoid is used as a measure of the quality of the clustering. The sub-dataset for which the mean (or sum) is minimal, is retained. A further analysis is carried out on the final partition.
Each sub-dataset is forced to contain the medoids obtained from the
best sub-dataset until then. Randomly drawn observations are added to
this set until sampsize
has been reached.
When cluster.only is true, the result is simply a (possibly named) integer vector specifying the clustering, i.e., clara(x,k, cluster.only=TRUE) is the same as clara(x,k)$clustering but computed more efficiently.
If cluster.only
is false (as by default),
an object of class "clara"
representing the clustering. See
clara.object
for details.
If cluster.only is true, the result is the "clustering", an integer vector of length n with entries from 1:k.
By default, the random sampling is implemented with a very simple scheme (with period 2^16 = 65536) inside the Fortran code, independently of R's random number generation, and as a matter of fact, deterministically. Alternatively, we recommend setting rngR = TRUE which uses R's random number generators. Then, clara() results are made reproducible typically by using set.seed() before calling clara.
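A minimal sketch of this reproducible use (artificial data; the seed value is arbitrary):

## reproducible clara() runs via R's RNG
x <- matrix(rnorm(2000), ncol = 2)
set.seed(7); c1 <- clara(x, k = 3, rngR = TRUE)
set.seed(7); c2 <- clara(x, k = 3, rngR = TRUE)
identical(c1$clustering, c2$clustering)  # TRUE: same seed, same sub-sampling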
The storage requirement of clara computation (for small k) is about O(n*p) + O(j^2) where j = sampsize and (n,p) = dim(x). The CPU computing time (again assuming small k) is about O(n*p*j^2*N), where N = samples.
For “small” datasets, the function pam
can be used
directly. What can be considered small, is really a function
of available computing power, both memory (RAM) and speed.
Originally (1990), “small” meant less than 100 observations;
in 1997, the authors said “small (say with fewer than 200
observations)”; as of 2006, you can use pam
with
several thousand observations.
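A hedged sketch comparing the two on the same moderately sized artificial data; note that the cluster labels of the two results may be permuted relative to each other:

set.seed(1)
x  <- rbind(matrix(rnorm(600, mean = 0), ncol = 2),
            matrix(rnorm(600, mean = 4), ncol = 2))
p2 <- pam(x, 2)                  # works on the full dissimilarity structure
c2 <- clara(x, 2, samples = 20)  # works on sub-samples of size sampsize
table(pam = p2$clustering, clara = c2$clustering)  # ideally (near-)diagonal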
Kaufman and Rousseeuw (see agnes
), originally.
Metric "jaccard"
: Kamil Kozlowski (@ownedoutcomes.com
)
and Kamil Jadeszko.
All arguments from trace
on, and most R documentation and all
tests by Martin Maechler.
agnes
for background and references;
clara.object
, pam
,
partition.object
, plot.partition
.
## generate 500 objects, divided into 2 clusters. x <- rbind(cbind(rnorm(200,0,8), rnorm(200,0,8)), cbind(rnorm(300,50,8), rnorm(300,50,8))) clarax <- clara(x, 2, samples=50) clarax clarax$clusinfo ## using pamLike=TRUE gives the same (apart from the 'call'): all.equal(clarax[-8], clara(x, 2, samples=50, pamLike = TRUE)[-8]) plot(clarax) ## cluster.only = TRUE -- save some memory/time : clclus <- clara(x, 2, samples=50, cluster.only = TRUE) stopifnot(identical(clclus, clarax$clustering)) ## 'xclara' is an artificial data set with 3 clusters of 1000 bivariate ## objects each. data(xclara) (clx3 <- clara(xclara, 3)) ## "better" number of samples cl.3 <- clara(xclara, 3, samples=100) ## but that did not change the result here: stopifnot(cl.3$clustering == clx3$clustering) ## Plot similar to Figure 5 in Struyf et al (1996) ## Not run: plot(clx3, ask = TRUE) ## Try 100 times *different* random samples -- for reliability: nSim <- 100 nCl <- 3 # = no.classes set.seed(421)# (reproducibility) cl <- matrix(NA,nrow(xclara), nSim) for(i in 1:nSim) cl[,i] <- clara(xclara, nCl, medoids.x = FALSE, rngR = TRUE)$cluster tcl <- apply(cl,1, tabulate, nbins = nCl) ## those that are not always in same cluster (5 out of 3000 for this seed): (iDoubt <- which(apply(tcl,2, function(n) all(n < nSim)))) if(length(iDoubt)) { # (not for all seeds) tabD <- tcl[,iDoubt, drop=FALSE] dimnames(tabD) <- list(cluster = paste(1:nCl), obs = format(iDoubt)) t(tabD) # how many times in which clusters }
The objects of class "clara"
represent a partitioning of a large
dataset into clusters and are typically returned from clara
.
A legitimate clara
object is a list with the following components:
sample |
labels or case numbers of the observations in the best sample, that is,
the sample used by the |
medoids |
the medoids or representative objects of the clusters.
It is a matrix with in each row the coordinates of one medoid.
Possibly |
i.med |
the indices of the |
clustering |
the clustering vector, see |
objective |
the objective function for the final clustering of the entire dataset. |
clusinfo |
matrix, each row gives numerical information for one cluster. These are the cardinality of the cluster (number of observations), the maximal and average dissimilarity between the observations in the cluster and the cluster's medoid. The last column is the maximal dissimilarity between the observations in the cluster and the cluster's medoid, divided by the minimal dissimilarity between the cluster's medoid and the medoid of any other cluster. If this ratio is small, the cluster is well-separated from the other clusters. |
diss |
dissimilarity (maybe NULL), see |
silinfo |
list with silhouette width information for the best sample, see
|
call |
generating call, see |
data |
matrix, possibly standardized, or NULL, see
|
The "clara"
class has methods for the following generic functions:
print
, summary
.
The class "clara"
inherits from "partition"
.
Therefore, the generic functions plot
and clusplot
can
be used on a clara
object.
clara
, dissimilarity.object
,
partition.object
, plot.partition
.
clusGap() calculates a goodness of clustering measure, the "gap" statistic. For each number of clusters k, it compares log(W(k)) with E*[log(W(k))], where the latter is defined via bootstrapping, i.e., simulating from a reference (H_0) distribution, a uniform distribution on the hypercube determined by the ranges of x, after first centering, and then svd (aka 'PCA')-rotating them when (as by default) spaceH0 = "scaledPCA".

maxSE(f, SE.f) determines the location of the maximum of f, taking a "1-SE rule" into account for the *SE* methods. The default method "firstSEmax" looks for the smallest k such that its value f(k) is not more than 1 standard error away from the first local maximum. This is similar but not the same as "Tibs2001SEmax", Tibshirani et al's recommendation of determining the number of clusters from the gap statistics and their standard deviations.
clusGap(x, FUNcluster, K.max, B = 100, d.power = 1, spaceH0 = c("scaledPCA", "original"), verbose = interactive(), ...) maxSE(f, SE.f, method = c("firstSEmax", "Tibs2001SEmax", "globalSEmax", "firstmax", "globalmax"), SE.factor = 1) ## S3 method for class 'clusGap' print(x, method = "firstSEmax", SE.factor = 1, ...) ## S3 method for class 'clusGap' plot(x, type = "b", xlab = "k", ylab = expression(Gap[k]), main = NULL, do.arrows = TRUE, arrowArgs = list(col="red3", length=1/16, angle=90, code=3), ...)
x |
numeric matrix or |
FUNcluster |
a |
K.max |
the maximum number of clusters to consider, must be at least two. |
B |
integer, number of Monte Carlo (“bootstrap”) samples. |
d.power |
a positive integer specifying the power |
spaceH0 |
a |
verbose |
integer or logical, determining if “progress” output should be printed. The default prints one bit per bootstrap sample. |
... |
(for |
f |
numeric vector of ‘function values’, of length
|
SE.f |
numeric vector of length |
method |
character string indicating how the “optimal”
number of clusters,
See the examples for a comparison in a simple case. |
SE.factor |
[When |
type , xlab , ylab , main
|
arguments with the same meaning as in
|
do.arrows |
logical indicating if (1 SE -)“error bars”
should be drawn, via |
arrowArgs |
a list of arguments passed to |
The main result <res>$Tab[,"gap"]
of course is from
bootstrapping aka Monte Carlo simulation and hence random, or
equivalently, depending on the initial random seed (see
set.seed()
).
On the other hand, in our experience, using B = 500
gives
quite precise results such that the gap plot is basically unchanged
after another run.
clusGap(..)
returns an object of S3 class "clusGap"
,
basically a list with components
Tab |
a matrix with |
call |
the |
spaceH0 |
the |
n |
number of observations, i.e., |
B |
input |
FUNcluster |
input function |
This function is originally based on the functions gap
of
former (Bioconductor) package SAGx by Per Broberg,
gapStat()
from former package SLmisc by Matthias Kohl
and ideas from gap()
and its methods of package lga by
Justin Harrington.
The current implementation is by Martin Maechler.
The implementation of spaceH0 = "original"
is based on code
proposed by Juan Gonzalez.
Tibshirani, R., Walther, G. and Hastie, T. (2001). Estimating the number of data clusters via the Gap statistic. Journal of the Royal Statistical Society B, 63, 411–423.
Tibshirani, R., Walther, G. and Hastie, T. (2000). Estimating the number of clusters in a dataset via the Gap statistic. Technical Report. Stanford.
Dudoit, S. and Fridlyand, J. (2002) A prediction-based resampling method for estimating the number of clusters in a dataset. Genome Biology 3(7). doi:10.1186/gb-2002-3-7-research0036
Per Broberg (2006). SAGx: Statistical Analysis of the GeneChip. R package version 1.9.7. https://bioconductor.org/packages/3.12/bioc/html/SAGx.html Deprecated and removed from Bioc ca. 2022
silhouette
for a much simpler less sophisticated
goodness of clustering measure.
cluster.stats()
in package fpc for
alternative measures.
### --- maxSE() methods ------------------------------------------- (mets <- eval(formals(maxSE)$method)) fk <- c(2,3,5,4,7,8,5,4) sk <- c(1,1,2,1,1,3,1,1)/2 ## use plot.clusGap(): plot(structure(class="clusGap", list(Tab = cbind(gap=fk, SE.sim=sk)))) ## Note that 'firstmax' and 'globalmax' are always at 3 and 6 : sapply(c(1/4, 1,2,4), function(SEf) sapply(mets, function(M) maxSE(fk, sk, method = M, SE.factor = SEf))) ### --- clusGap() ------------------------------------------------- ## ridiculously nicely separated clusters in 3 D : x <- rbind(matrix(rnorm(150, sd = 0.1), ncol = 3), matrix(rnorm(150, mean = 1, sd = 0.1), ncol = 3), matrix(rnorm(150, mean = 2, sd = 0.1), ncol = 3), matrix(rnorm(150, mean = 3, sd = 0.1), ncol = 3)) ## Slightly faster way to use pam (see below) pam1 <- function(x,k) list(cluster = pam(x,k, cluster.only=TRUE)) ## We do not recommend using hier.clustering here, but if you want, ## there is factoextra::hcut () or a cheap version of it hclusCut <- function(x, k, d.meth = "euclidean", ...) list(cluster = cutree(hclust(dist(x, method=d.meth), ...), k=k)) ## You can manually set it before running this : doExtras <- TRUE # or FALSE if(!(exists("doExtras") && is.logical(doExtras))) doExtras <- cluster:::doExtras() if(doExtras) { ## Note we use B = 60 in the following examples to keep them "speedy". ## ---- rather keep the default B = 500 for your analysis! ## note we can pass 'nstart = 20' to kmeans() : gskmn <- clusGap(x, FUN = kmeans, nstart = 20, K.max = 8, B = 60) gskmn #-> its print() method plot(gskmn, main = "clusGap(., FUN = kmeans, n.start=20, B= 60)") set.seed(12); system.time( gsPam0 <- clusGap(x, FUN = pam, K.max = 8, B = 60) ) set.seed(12); system.time( gsPam1 <- clusGap(x, FUN = pam1, K.max = 8, B = 60) ) ## and show that it gives the "same": not.eq <- c("call", "FUNcluster"); n <- names(gsPam0) eq <- n[!(n %in% not.eq)] stopifnot(identical(gsPam1[eq], gsPam0[eq])) print(gsPam1, method="globalSEmax") print(gsPam1, method="globalmax") print(gsHc <- clusGap(x, FUN = hclusCut, K.max = 8, B = 60)) }# end {doExtras} gs.pam.RU <- clusGap(ruspini, FUN = pam1, K.max = 8, B = 60) gs.pam.RU plot(gs.pam.RU, main = "Gap statistic for the 'ruspini' data") mtext("k = 4 is best .. and k = 5 pretty close") ## This takes a minute.. ## No clustering ==> k = 1 ("one cluster") should be optimal: Z <- matrix(rnorm(256*3), 256,3) gsP.Z <- clusGap(Z, FUN = pam1, K.max = 8, B = 200) plot(gsP.Z, main = "clusGap(<iid_rnorm_p=3>) ==> k = 1 cluster is optimal") gsP.Z
Draws a 2-dimensional “clusplot” (clustering plot) on the
current graphics device.
The generic function has a default and a partition
method.
clusplot(x, ...) ## S3 method for class 'partition' clusplot(x, main = NULL, dist = NULL, ...)
x |
an R object, here, specifically an object of class
|
main |
title for the plot; when |
dist |
when |
... |
optional arguments passed to methods, notably the
|
The clusplot.partition()
method relies on clusplot.default
.
If the clustering algorithms pam
, fanny
and clara
are applied to a data matrix of observations-by-variables then a
clusplot of the resulting clustering can always be drawn. When the
data matrix contains missing values and the clustering is performed
with pam
or fanny
, the dissimilarity
matrix will be given as input to clusplot
. When the clustering
algorithm clara
was applied to a data matrix with NA
s
then clusplot()
will replace the missing values as described in
clusplot.default
, because a dissimilarity matrix is not
available.
For the partition
(and default
) method: An invisible
list with components Distances
and Shading
, as for
clusplot.default
, see there.
a 2-dimensional clusplot is created on the current graphics device.
clusplot.default
for references;
partition.object
, pam
,
pam.object
, clara
,
clara.object
, fanny
,
fanny.object
, par
.
## For more, see ?clusplot.default
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
           cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
clusplot(pam(x, 2))
## add noise, and try again :
x4 <- cbind(x, rnorm(25), rnorm(25))
clusplot(pam(x4, 2))
Creates a bivariate plot visualizing a partition (clustering) of the data. All observation are represented by points in the plot, using principal components or multidimensional scaling. Around each cluster an ellipse is drawn.
## Default S3 method: clusplot(x, clus, diss = FALSE, s.x.2d = mkCheckX(x, diss), stand = FALSE, lines = 2, shade = FALSE, color = FALSE, labels= 0, plotchar = TRUE, col.p = "dark green", col.txt = col.p, col.clus = if(color) c(2, 4, 6, 3) else 5, cex = 1, cex.txt = cex, span = TRUE, add = FALSE, xlim = NULL, ylim = NULL, main = paste("CLUSPLOT(", deparse1(substitute(x)),")"), sub = paste("These two components explain", round(100 * var.dec, digits = 2), "% of the point variability."), xlab = "Component 1", ylab = "Component 2", verbose = getOption("verbose"), ...)
x |
matrix or data frame, or dissimilarity matrix, depending on
the value of the In case of a matrix (alike), each row corresponds to an observation,
and each column corresponds to a variable. All variables must be
numeric. Missing values ( In case of a dissimilarity matrix, |
clus |
a vector of length n representing a clustering of |
diss |
logical indicating if |
s.x.2d |
a |
stand |
logical flag: if true, then the representations of the n observations in the 2-dimensional plot are standardized. |
lines |
integer out of In case E1 and E2 overlap on the line through
|
shade |
logical flag: if TRUE, then the ellipses are shaded in relation to their density. The density is the number of points in the cluster divided by the area of the ellipse. |
color |
logical flag: if TRUE, then the ellipses are colored with respect to their density. With increasing density, the colors are light blue, light green, red and purple. To see these colors on the graphics device, an appropriate color scheme should be selected (we recommend a white background). |
labels |
integer code, currently one of 0,1,2,3,4 and 5. If
The levels of the vector A possible |
plotchar |
logical flag: if TRUE, then the plotting symbols differ for points belonging to different clusters. |
span |
logical flag: if TRUE, then each cluster is represented by the ellipse with
smallest area containing all its points. (This is a special case of the
minimum volume ellipsoid.) There are also some special cases: When a cluster consists of only
one point, a tiny circle is drawn around it. When the points of a
cluster fall on a straight line, |
add |
logical indicating if ellipses (and labels if |
col.p |
color code(s) used for the observation points. |
col.txt |
color code(s) used for the labels (if |
col.clus |
color code for the ellipses (and their labels); only one if color is false (as per default). |
cex , cex.txt
|
character expansion (size), for the point symbols and point labels, respectively. |
xlim , ylim
|
numeric vectors of length 2, giving the x- and y-
ranges as in |
main |
main title for the plot; by default, one is constructed. |
sub |
sub title for the plot; by default, one is constructed. |
xlab , ylab
|
x- and y- axis labels for the plot, with defaults. |
verbose |
a logical indicating, if there should be extra diagnostic output; mainly for ‘debugging’. |
... |
Further graphical parameters may also be supplied, see
|
clusplot
uses function calls
princomp(*, cor = (ncol(x) > 2))
or
cmdscale(*, add=TRUE)
, respectively, depending on
diss
being false or true. These functions are data reduction
techniques to represent the data in a bivariate plot.
Ellipses are then drawn to indicate the clusters. The further layout of the plot is determined by the optional arguments.
An invisible list with components:
Distances |
When |
Shading |
A vector of length k (where k is the number of clusters), containing the
amount of shading per cluster. Let y be a vector where element i is the
ratio between the number of points in cluster i and the area of ellipse i.
When the cluster i is a line segment, y[i] and the density of the cluster are
set to |
a visual display of the clustering is plotted on the current graphics device.
When we have 4 or fewer clusters, then the color=TRUE
gives
every cluster a different color. When there are more than 4 clusters,
clusplot uses the function pam
to cluster the
densities into 4 groups such that ellipses with nearly the same
density get the same color. col.clus
specifies the colors used.
The col.p
and col.txt
arguments, added for R,
are recycled to have length the number of observations.
If col.p
has more than one value, using color = TRUE
can
be confusing because of a mix of point and ellipse colors.
Pison, G., Struyf, A. and Rousseeuw, P.J. (1999)
Displaying a Clustering with CLUSPLOT,
Computational Statistics and Data Analysis, 30, 381–392.
Kaufman, L. and Rousseeuw, P.J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997). Integrating Robust Clustering Techniques in S-PLUS, Computational Statistics and Data Analysis, 26, 17-37.
princomp
, cmdscale
, pam
,
clara
, daisy
, par
,
identify
, cov.mve
,
clusplot.partition
.
## plotting votes.diss(dissimilarity) in a bivariate plot and ## partitioning into 2 clusters data(votes.repub) votes.diss <- daisy(votes.repub) pamv <- pam(votes.diss, 2, diss = TRUE) clusplot(pamv, shade = TRUE) ## is the same as votes.clus <- pamv$clustering clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE) ## Now look at components 3 and 2 instead of 1 and 2: str(cMDS <- cmdscale(votes.diss, k=3, add=TRUE)) clusplot(pamv, s.x.2d = list(x=cMDS$points[, c(3,2)], labs=rownames(votes.repub), var.dec=NA), shade = TRUE, col.p = votes.clus, sub="", xlab = "Component 3", ylab = "Component 2") clusplot(pamv, col.p = votes.clus, labels = 4)# color points and label ellipses # "simple" cheap ellipses: larger than minimum volume: # here they are *added* to the previous plot: clusplot(pamv, span = FALSE, add = TRUE, col.clus = "midnightblue") ## Setting a small *label* size: clusplot(votes.diss, votes.clus, diss = TRUE, labels = 3, cex.txt = 0.6) if(dev.interactive()) { # uses identify() *interactively* : clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE, labels = 1) clusplot(votes.diss, votes.clus, diss = TRUE, labels = 5)# ident. only points } ## plotting iris (data frame) in a 2-dimensional plot and partitioning ## into 3 clusters. data(iris) iris.x <- iris[, 1:4] cl3 <- pam(iris.x, 3)$clustering op <- par(mfrow= c(2,2)) clusplot(iris.x, cl3, color = TRUE) U <- par("usr") ## zoom in : rect(0,-1, 2,1, border = "orange", lwd=2) clusplot(iris.x, cl3, color = TRUE, xlim = c(0,2), ylim = c(-1,1)) box(col="orange",lwd=2); mtext("sub region", font = 4, cex = 2) ## or zoom out : clusplot(iris.x, cl3, color = TRUE, xlim = c(-4,4), ylim = c(-4,4)) mtext("'super' region", font = 4, cex = 2) rect(U[1],U[3], U[2],U[4], lwd=2, lty = 3) # reset graphics par(op)
Computes the “agglomerative coefficient” (aka “divisive
coefficient” for diana
), measuring the
clustering structure of the dataset.
For each observation i, denote by m(i) its dissimilarity to the first cluster it is merged with, divided by the dissimilarity of the merger in the final step of the algorithm. The agglomerative coefficient is the average of all 1 - m(i). It can also be seen as the average width (or the percentage filled) of the banner plot.
coefHier()
directly interfaces to the underlying C code, and
“proves” that only object$heights
is needed to
compute the coefficient.
Because it grows with the number of observations, this measure should not be used to compare datasets of very different sizes.
coefHier(object) coef.hclust(object, ...) ## S3 method for class 'hclust' coef(object, ...) ## S3 method for class 'twins' coef(object, ...)
object |
an object of class Since For |
... |
currently unused potential further arguments |
a number specifying the agglomerative (or divisive for
diana
objects) coefficient as defined by Kaufman and Rousseeuw,
see agnes.object $ ac
or diana.object $ dc
.
data(agriculture)
aa <- agnes(agriculture)
coef(aa)            # really just extracts aa$ac
coef(as.hclust(aa)) # recomputes
coefHier(aa)        # ditto
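As a rough cross-check of the definition above, the coefficient can be recomputed "by hand" from the as.hclust() representation (a sketch; it uses the fact that, for hclust objects, heights are indexed by merge step, so the last height is the final merger):

data(agriculture)
aa <- agnes(agriculture)
hc <- as.hclust(aa)
n  <- nrow(agriculture)
## height of the first merge each single observation i is involved in:
h1 <- sapply(seq_len(n), function(i)
    hc$height[min(which(hc$merge == -i, arr.ind = TRUE)[, "row"])])
m  <- h1 / hc$height[n - 1L]  # divide by the height of the final merger
mean(1 - m)                   # should essentially equal ...
aa$ac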
Compute all the pairwise dissimilarities (distances) between observations
in the data set. The original variables may be of mixed types. In
that case, or whenever metric = "gower"
is set, a
generalization of Gower's formula is used, see ‘Details’
below.
daisy(x, metric = c("euclidean", "manhattan", "gower"), stand = FALSE, type = list(), weights = rep.int(1, p), warnBin = warnType, warnAsym = warnType, warnConst = warnType, warnType = TRUE)
x |
numeric matrix or data frame, of dimension |
metric |
character string specifying the metric to be used.
The currently available options are “Gower's distance” is chosen by metric |
stand |
logical flag: if TRUE, then the measurements in If not all columns of |
type |
list for specifying some (or all) of the types of the
variables (columns) in
Each component is a (character or numeric) vector, containing either
the names or the numbers of the corresponding columns of Variables not mentioned in |
weights |
an optional numeric vector of length |
warnBin , warnAsym , warnConst
|
logicals indicating if the corresponding type checking warnings should be signalled (when found). |
warnType |
logical indicating if all the type checking warnings should be active or not. |
The original version of daisy
is fully described in chapter 1
of Kaufman and Rousseeuw (1990).
Compared to dist
whose input must be numeric
variables, the main feature of daisy
is its ability to handle
other variable types as well (e.g. nominal, ordinal, (a)symmetric
binary) even when different types occur in the same data set.
The handling of nominal, ordinal, and (a)symmetric binary data is
achieved by using the general dissimilarity coefficient of Gower
(1971). If x
contains any columns of these
data-types, both arguments metric
and stand
will be
ignored and Gower's coefficient will be used as the metric. This can
also be activated for purely numeric data by metric = "gower"
.
With that, each variable (column) is first standardized by dividing each entry by the range of the corresponding variable, after subtracting the minimum value; consequently the rescaled variable has range [0, 1], exactly.
Note that setting the type to symm
(symmetric binary) gives the
same dissimilarities as using nominal (which is chosen for
non-ordered factors) only when no missing values are present, and more
efficiently.
Note that daisy
signals a warning when 2-valued numerical
variables do not have an explicit type
specified, because the
reference authors recommend to consider using "asymm"
; the
warning may be silenced by warnBin = FALSE
.
In the daisy
algorithm, missing values in a row of x are not
included in the dissimilarities involving that row. There are two
main cases,
If all variables are interval scaled (and metric is not "gower"), the metric is "euclidean", and n_g is the number of columns in which neither row i nor row j have NAs, then the dissimilarity d(i,j) returned is sqrt(p/n_g) (where p = ncol(x)) times the Euclidean distance between the two vectors of length n_g shortened to exclude NAs. The rule is similar for the "manhattan" metric, except that the coefficient is p/n_g. If n_g == 0, the dissimilarity is NA. (A small numeric check of both cases is sketched below, after the second case.)
When some variables have a type other than interval scaled, or if metric = "gower" is specified, the dissimilarity between two rows is the weighted mean of the contributions of each variable. Specifically,

    d(i,j) = sum(k=1..p) [ w_k * delta(ij;k) * d(ij;k) ] / sum(k=1..p) [ w_k * delta(ij;k) ].

In other words, d(i,j) is a weighted mean of the d(ij;k) with weights w_k * delta(ij;k), where w_k = weights[k], delta(ij;k) is 0 or 1, and d(ij;k), the k-th variable contribution to the total distance, is a distance between x[i,k] and x[j,k], see below.

The 0-1 weight delta(ij;k) becomes zero when the variable x[,k] is missing in either or both rows (i and j), or when the variable is asymmetric binary and both values are zero. In all other situations it is 1.

The contribution d(ij;k) of a nominal or binary variable to the total dissimilarity is 0 if both values are equal, 1 otherwise. The contribution of other variables is the absolute difference of both values, divided by the total range of that variable. Note that "standard scoring" is applied to ordinal variables, i.e., they are replaced by their integer codes 1:K. Note that this is not the same as using their ranks (since there typically are ties).

As the individual contributions d(ij;k) are in [0, 1], the dissimilarity d(i,j) will remain in this range. If all weights w_k * delta(ij;k) are zero, the dissimilarity is set to NA.
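Two small numeric cross-checks of these rules, as hedged sketches with made-up toy matrices (all object names below are illustrative only):

## (1) all-interval case with an NA: the sqrt(p/n_g) rescaling
m <- rbind(c(1, 2, NA),
           c(4, 6, 7),
           c(0, 1, 2),
           c(3, 5, 9))
p   <- ncol(m)                      # 3 variables
n_g <- 2                            # columns where neither row 1 nor row 2 has an NA
d12 <- sqrt((1 - 4)^2 + (2 - 6)^2)  # Euclidean distance on those n_g columns
as.matrix(daisy(m, metric = "euclidean"))[1, 2]
sqrt(p / n_g) * d12                 # should agree with the daisy() value above

## (2) purely numeric "gower": mean of range-scaled absolute differences
set.seed(42)
X   <- matrix(runif(5 * 3), nrow = 5)  # 5 observations, 3 numeric variables
rng <- apply(X, 2, function(col) diff(range(col)))
d12.byhand <- mean(abs(X[1, ] - X[2, ]) / rng)  # equal weights, no NAs
d12.daisy  <- as.matrix(daisy(X, metric = "gower"))[1, 2]
all.equal(d12.byhand, d12.daisy)       # should be TRUE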
an object of class "dissimilarity"
containing the
dissimilarities among the rows of x
. This is typically the
input for the functions pam
, fanny
, agnes
or
diana
. For more details, see dissimilarity.object
.
Dissimilarities are used as inputs to cluster analysis and multidimensional scaling. The choice of metric may have a large impact.
Anja Struyf, Mia Hubert, and Peter J. Rousseeuw, for the original
version.
Martin Maechler improved the NA
handling and
type
specification checking, and extended functionality to
metric = "gower"
and the optional weights
argument.
Gower, J. C. (1971) A general coefficient of similarity and some of its properties, Biometrics 27, 857–874.
Kaufman, L. and Rousseeuw, P.J. (1990) Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997) Integrating Robust Clustering Techniques in S-PLUS, Computational Statistics and Data Analysis 26, 17–37.
dissimilarity.object
, dist
,
pam
, fanny
, clara
,
agnes
, diana
.
data(agriculture) ## Example 1 in ref: ## Dissimilarities using Euclidean metric and without standardization d.agr <- daisy(agriculture, metric = "euclidean", stand = FALSE) d.agr as.matrix(d.agr)[,"DK"] # via as.matrix.dist(.) ## compare with as.matrix(daisy(agriculture, metric = "gower")) ## Example 2 in reference, extended --- different ways of "mixed" / "gower": example(flower) # -> data(flower) *and* provide 'flowerN' summary(d0 <- daisy(flower)) # -> the first 3 {0,1} treated as *N*ominal summary(dS123 <- daisy(flower, type = list(symm = 1:3))) # first 3 treated as *S*ymmetric stopifnot(dS123 == d0) # i.e., *S*ymmetric <==> *N*ominal {for 2-level factor} summary(dNS123<- daisy(flowerN, type = list(symm = 1:3))) stopifnot(dS123 == d0) ## by default, however ... summary(dA123 <- daisy(flowerN)) # .. all 3 logicals treated *A*symmetric binary (w/ warning) summary(dA3 <- daisy(flower, type = list(asymm = 3))) summary(dA13 <- daisy(flower, type = list(asymm = c(1, 3), ordratio = 7))) ## Mixing variable *names* and column numbers (failed in the past): summary(dfl3 <- daisy(flower, type = list(asymm = c("V1", "V3"), symm= 2, ordratio= 7, logratio= 8))) ## If we'd treat the first 3 as simple {0,1} Nflow <- flower Nflow[,1:3] <- lapply(flower[,1:3], \(f) as.integer(as.character(f))) summary(dN <- daisy(Nflow)) # w/ warning: treated binary .. 1:3 as interval ## Still, using Euclidean/Manhattan distance for {0-1} *is* identical to treating them as "N" : stopifnot(dN == d0) stopifnot(dN == daisy(Nflow, type = list(symm = 1:3))) # or as "S"
Computes a divisive hierarchical clustering of the dataset
returning an object of class diana
.
diana(x, diss = inherits(x, "dist"), metric = "euclidean", stand = FALSE, stop.at.k = FALSE, keep.diss = n < 100, keep.data = !diss, trace.lev = 0)
x |
data matrix or data frame, or dissimilarity matrix or object,
depending on the value of the In case of a matrix or data frame, each row corresponds to an observation,
and each column corresponds to a variable. All variables must be numeric.
Missing values ( In case of a dissimilarity matrix, |
diss |
logical flag: if TRUE (default for |
metric |
character string specifying the metric to be used for calculating
dissimilarities between observations. |
stand |
logical; if true, the measurements in |
stop.at.k |
logical or integer, |
keep.diss , keep.data
|
logicals indicating if the dissimilarities
and/or input data |
trace.lev |
integer specifying a trace level for printing
diagnostics during the algorithm. Default |
diana
is fully described in chapter 6 of Kaufman and Rousseeuw (1990).
It is probably unique in computing a divisive hierarchy, whereas most
other software for hierarchical clustering is agglomerative.
Moreover, diana
provides (a) the divisive coefficient
(see diana.object
) which measures the amount of clustering structure
found; and (b) the banner, a novel graphical display
(see plot.diana
).
The diana
-algorithm constructs a hierarchy of clusterings,
starting with one large
cluster containing all n observations. Clusters are divided until each cluster
contains only a single observation.
At each stage, the cluster with the largest diameter is selected.
(The diameter of a cluster is the largest dissimilarity between any
two of its observations.)
To divide the selected cluster, the algorithm first looks for its most
disparate observation (i.e., which has the largest average dissimilarity to the
other observations of the selected cluster). This observation initiates the
"splinter group". In subsequent steps, the algorithm reassigns observations
that are closer to the "splinter group" than to the "old party". The result
is a division of the selected cluster into two new clusters.
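For illustration only (not diana()'s internal code; the object names below are made up), the diameter of a cluster and the observation that would initiate the splinter group can be computed directly from a dissimilarity matrix:

## Diameter of a "cluster" and its most disparate member, from a dissimilarity matrix
data(agriculture)
d   <- as.matrix(daisy(agriculture))      # full symmetric dissimilarity matrix
cl  <- rownames(agriculture)              # treat the whole data set as one cluster
dcl <- d[cl, cl]
max(dcl)                                  # the cluster's diameter
avg <- rowSums(dcl) / (length(cl) - 1)    # average dissimilarity to the *other* members
names(which.max(avg))                     # would initiate the "splinter group"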
an object of class "diana"
representing the clustering;
this class has methods for the following generic functions:
print
, summary
, plot
.
Further, the class "diana"
inherits from
"twins"
. Therefore, the generic function pltree
can be
used on a diana
object, and as.hclust
and
as.dendrogram
methods are available.
A legitimate diana
object is a list with the following components:
order |
a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
order.lab |
a vector similar to |
height |
a vector with the diameters of the clusters prior to splitting. |
dc |
the divisive coefficient, measuring the clustering structure of the
dataset. For each observation i, denote by |
merge |
an (n-1) by 2 matrix, where n is the number of
observations. Row i of |
diss |
an object of class |
data |
a matrix containing the original or standardized measurements, depending
on the |
agnes
also for background and references;
cutree
(and as.hclust
) for grouping
extraction; daisy
, dist
,
plot.diana
, twins.object
.
data(votes.repub)
dv <- diana(votes.repub, metric = "manhattan", stand = TRUE)
print(dv)
plot(dv)

## Cut into 2 groups:
dv2 <- cutree(as.hclust(dv), k = 2)
table(dv2) # 8 and 42 group members
rownames(votes.repub)[dv2 == 1]

## For two groups, does the metric matter ?
dv0 <- diana(votes.repub, stand = TRUE) # default: Euclidean
dv.2 <- cutree(as.hclust(dv0), k = 2)
table(dv2 == dv.2) ## identical group assignments

str(as.dendrogram(dv0)) # {via as.dendrogram.twins() method}

data(agriculture)
## Plot similar to Figure 8 in ref
## Not run: plot(diana(agriculture), ask = TRUE)
Objects of class "dissimilarity"
representing the dissimilarity
matrix of a dataset.
The dissimilarity matrix is symmetric, and hence its lower triangle
(column wise) is represented as a vector to save storage space.
If the object is called do, and n the number of
observations, i.e., n <- attr(do, "Size"), then
for i < j <= n, the dissimilarity between (row) i and j is
do[n*(i-1) - i*(i-1)/2 + j-i].
The length of the vector is n*(n-1)/2, i.e., of order n^2.
"dissimilarity"
objects also inherit from class
dist
and can use dist
methods, in
particular, as.matrix
, such that the dissimilarity d(i,j)
from above is just
as.matrix(do)[i,j].
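For instance, a minimal check of this indexing (an illustrative sketch using the agriculture data shipped with the package):

## Check the lower-triangle indexing of a "dissimilarity"/"dist" object
data(agriculture)
do <- daisy(agriculture)
n  <- attr(do, "Size")
i <- 2; j <- 5                              # any pair with i < j <= n
stopifnot(do[n*(i-1) - i*(i-1)/2 + j-i] == as.matrix(do)[i, j],
          length(do) == n*(n-1)/2)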
The object has the following attributes:
Size |
the number of observations in the dataset. |
Metric |
the metric used for calculating the dissimilarities. Possible values are "euclidean", "manhattan", "mixed" (if variables of different types were present in the dataset), and "unspecified". |
Labels |
optionally, contains the labels, if any, of the observations of the dataset. |
NA.message |
optionally, if a dissimilarity could not be computed, because of too many missing values for some observations of the dataset. |
Types |
when a mixed metric was used, the types for each
variable as one-letter codes, see also
|
daisy
returns this class of objects.
Also the functions pam
, clara
, fanny
,
agnes
, and diana
return a dissimilarity
object,
as one component of their return objects.
The "dissimilarity"
class has methods for the following generic
functions: print
, summary
.
daisy
, dist
,
pam
, clara
, fanny
,
agnes
, diana
.
Compute the “ellipsoid hull” or “spanning ellipsoid”, i.e. the ellipsoid of minimal volume (‘area’ in 2D) such that all given points lie just inside or on the boundary of the ellipsoid.
ellipsoidhull(x, tol=0.01, maxit=5000,
              ret.wt = FALSE, ret.sqdist = FALSE, ret.pr = FALSE)
## S3 method for class 'ellipsoid'
print(x, digits = max(1, getOption("digits") - 2), ...)
x |
the |
tol |
convergence tolerance for Titterington's algorithm.
Setting this to much smaller values may drastically increase the number of
iterations needed, and you may want to increase |
maxit |
integer giving the maximal number of iteration steps for the algorithm. |
ret.wt , ret.sqdist , ret.pr
|
logicals indicating if additional
information should be returned, |
digits , ...
|
the usual arguments to |
The “spanning ellipsoid” algorithm is said to stem from
Titterington (1976); see Pison et al. (1999), who use it for
clusplot.default
.
The problem can be seen as a special case of the “Min.Vol.”
ellipsoid, of which a more flexible and general implementation is
cov.mve
in the MASS
package.
an object of class "ellipsoid"
, basically a list
with several components, comprising at least
cov |
|
loc |
|
d2 |
average squared radius. Further, |
wt |
the vector of weights iff |
sqdist |
the vector of squared distances iff |
prob |
the vector of algorithm probabilities iff |
it |
number of iterations used. |
tol , maxit
|
just the input argument, see above. |
eps |
the achieved tolerance which is the maximal squared radius
minus |
ierr |
error code as from the algorithm; |
conv |
logical indicating if the algorithm converged. This is defined as
|
Martin Maechler did the present class implementation; Rousseeuw et al did the underlying original code.
Pison, G., Struyf, A. and Rousseeuw, P.J. (1999)
Displaying a Clustering with CLUSPLOT,
Computational Statistics and Data Analysis, 30, 381–392.
D.M. Titterington (1976) Algorithms for computing D-optimal design on finite design spaces. In Proc. of the 1976 Conf. on Information Science and Systems, 213–216; Johns Hopkins University.
predict.ellipsoid
which is also the
predict
method for ellipsoid
objects.
volume.ellipsoid
for an example of ‘manual’
ellipsoid
object construction;
further ellipse
from package ellipse
and ellipsePoints
from package sfsmisc.
chull
for the convex hull,
clusplot
which makes use of this; cov.mve
.
x <- rnorm(100)
xy <- unname(cbind(x, rnorm(100) + 2*x + 10))
exy. <- ellipsoidhull(xy)
exy. # >> calling print.ellipsoid()

plot(xy, main = "ellipsoidhull(<Gauss data>) -- 'spanning points'")
lines(predict(exy.), col="blue")
points(rbind(exy.$loc), col = "red", cex = 3, pch = 13)

exy <- ellipsoidhull(xy, tol = 1e-7, ret.wt = TRUE, ret.sq = TRUE)
str(exy) # had small 'tol', hence many iterations
(ii <- which(zapsmall(exy $ wt) > 1e-6))
## --> only about 4 to 6 "spanning ellipsoid" points
round(exy$wt[ii],3); sum(exy$wt[ii]) # weights summing to 1
points(xy[ii,], pch = 21, cex = 2,
       col="blue", bg = adjustcolor("blue",0.25))
Computes a fuzzy clustering of the data into k
clusters.
fanny(x, k, diss = inherits(x, "dist"), memb.exp = 2, metric = c("euclidean", "manhattan", "SqEuclidean"), stand = FALSE, iniMem.p = NULL, cluster.only = FALSE, keep.diss = !diss && !cluster.only && n < 100, keep.data = !diss && !cluster.only, maxit = 500, tol = 1e-15, trace.lev = 0)
x |
data matrix or data frame, or dissimilarity matrix, depending on the
value of the In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed. In case of a dissimilarity matrix, |
k |
integer giving the desired number of clusters. It is
required that |
diss |
logical flag: if TRUE (default for |
memb.exp |
number |
metric |
character string specifying the metric to be used for
calculating dissimilarities between observations. Options are
|
stand |
logical; if true, the measurements in |
iniMem.p |
numeric |
cluster.only |
logical; if true, no silhouette information will be computed and returned, see details. |
keep.diss , keep.data
|
logicals indicating if the dissimilarities
and/or input data |
maxit , tol
|
maximal number of iterations and default tolerance
for convergence (relative convergence of the fit criterion) for the
FANNY algorithm. The defaults |
trace.lev |
integer specifying a trace level for printing
diagnostics during the C-internal algorithm.
Default |
In a fuzzy clustering, each observation is “spread out” over
the various clusters. Denote by u(i,v) the membership
of observation i
to cluster v.
The memberships are nonnegative, and for a fixed observation i they sum to 1.
The particular method fanny
stems from chapter 4 of
Kaufman and Rousseeuw (1990) (see the references in
daisy
) and has been extended by Martin Maechler to allow
user specified memb.exp
, iniMem.p
, maxit
,
tol
, etc.
Fanny aims to minimize the objective function

   sum_{v=1..k}  [ sum_{i,j=1..n} u(i,v)^r u(j,v)^r d(i,j) ]  /  [ 2 sum_{j=1..n} u(j,v)^r ]

where n is the number of observations, k is the number of clusters,
r is the membership exponent memb.exp, and d(i,j) is the dissimilarity
between observations i and j.
Note that r -> 1 gives increasingly crisper clusterings whereas
r -> Inf leads to complete fuzziness. K&R(1990), p.191 note that values
too close to 1 can lead to slow convergence. Further note that even the
default, r = 2, can lead to complete fuzziness, i.e., memberships
u(i,v) == 1/k. In that case a warning is signalled and the
user is advised to choose a smaller memb.exp (= r).
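As a hedged sketch (the object names are illustrative; fanny() computes this value internally and returns it in its objective component), the criterion above can be recomputed from the memberships and the dissimilarities:

## Recompute the FANNY objective from memberships u(i,v) and dissimilarities d(i,j)
data(ruspini)
d  <- as.matrix(daisy(ruspini))
fr <- fanny(ruspini, k = 4)
u  <- fr$membership
r  <- fr$memb.exp
stopifnot(abs(rowSums(u) - 1) < 1e-7)   # memberships of each observation sum to 1
obj <- sum(sapply(seq_len(ncol(u)), function(v)
          sum(outer(u[, v]^r, u[, v]^r) * d) / (2 * sum(u[, v]^r))))
obj   # compare with fr$objective (first element: minimal objective value reached)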
Compared to other fuzzy clustering methods, fanny
has the following
features: (a) it also accepts a dissimilarity matrix; (b) it is
more robust to the spherical cluster
assumption; (c) it provides
a novel graphical display, the silhouette plot (see
plot.partition
).
an object of class "fanny"
representing the clustering.
See fanny.object
for details.
agnes
for background and references;
fanny.object
, partition.object
,
plot.partition
, daisy
, dist
.
## generate 10+15 objects in two clusters, plus 3 objects lying
## between those clusters.
x <- rbind(cbind(rnorm(10, 0, 0.5), rnorm(10, 0, 0.5)),
           cbind(rnorm(15, 5, 0.5), rnorm(15, 5, 0.5)),
           cbind(rnorm( 3,3.2,0.5), rnorm( 3,3.2,0.5)))
fannyx <- fanny(x, 2)
## Note that observations 26:28 are "fuzzy" (closer to # 2):
fannyx
summary(fannyx)
plot(fannyx)

(fan.x.15 <- fanny(x, 2, memb.exp = 1.5)) # 'crispier' for obs. 26:28
(fanny(x, 2, memb.exp = 3))               # more fuzzy in general

data(ruspini)
f4 <- fanny(ruspini, 4)
stopifnot(rle(f4$clustering)$lengths == c(20,23,17,15))
plot(f4, which = 1)
## Plot similar to Figure 6 in Struyf et al (1996)
plot(fanny(ruspini, 5))
The objects of class "fanny"
represent a fuzzy clustering of a
dataset.
A legitimate fanny
object is a list with the following components:
membership |
matrix containing the memberships for each pair consisting of an observation and a cluster. |
memb.exp |
the membership exponent used in the fitting criterion. |
coeff |
Dunn's partition coefficient The normalized form of the coefficient is also given. It is defined
as |
clustering |
the clustering vector of the nearest crisp clustering, see
|
k.crisp |
integer ( |
objective |
named vector containing the minimal value of the objective function
reached by the FANNY algorithm and the relative convergence
tolerance |
convergence |
named vector with |
diss |
an object of class |
call |
generating call, see |
silinfo |
list with silhouette information of the nearest crisp clustering, see
|
data |
matrix, possibly standardized, or NULL, see
|
These objects are returned from fanny
.
The "fanny"
class has methods for the following generic functions:
print
, summary
.
The class "fanny"
inherits from "partition"
.
Therefore, the generic functions plot
and clusplot
can
be used on a fanny
object.
fanny
, print.fanny
,
dissimilarity.object
,
partition.object
, plot.partition
.
8 characteristics for 18 popular flowers.
data(flower)
A data frame with 18 observations on 8 variables:
[ , "V1"] | factor | winters |
[ , "V2"] | factor | shadow |
[ , "V3"] | factor | tubers |
[ , "V4"] | factor | color |
[ , "V5"] | ordered | soil |
[ , "V6"] | ordered | preference |
[ , "V7"] | numeric | height |
[ , "V8"] | numeric | distance |
winters, is binary and indicates whether the plant may be left in the garden when it freezes.
shadow, is binary and shows whether the plant needs to stand in the shadow.
tubers, is asymmetric binary and distinguishes between plants with tubers and plants that grow in any other way.
color, is nominal and specifies the flower's color (1 = white, 2 = yellow, 3 = pink, 4 = red, 5 = blue).
soil, is ordinal and indicates whether the plant grows in dry (1), normal (2), or wet (3) soil.
preference, is ordinal and gives someone's preference ranking going from 1 to 18.
height, is interval scaled, the plant's height in centimeters.
distance, is interval scaled, the distance in centimeters that should be left between the plants.
Struyf, Hubert and Rousseeuw (1996), see agnes
.
data(flower)
str(flower) # factors, ordered, numeric

## "Nicer" version (less numeric more self explainable) of 'flower':
flowerN <- flower
colnames(flowerN) <- c("winters", "shadow", "tubers", "color",
                       "soil", "preference", "height", "distance")
for(j in 1:3) flowerN[,j] <- (flowerN[,j] == "1")
levels(flowerN$color) <- c("1" = "white", "2" = "yellow", "3" = "pink",
                           "4" = "red", "5" = "blue")[levels(flowerN$color)]
levels(flowerN$soil) <- c("1" = "dry", "2" = "normal", "3" = "wet")[levels(flowerN$soil)]
flowerN
## ==> example(daisy) on how it is used
Compute index vectors for extracting or reordering of lower or upper triangular matrices that are stored as contiguous vectors.
lower.to.upper.tri.inds(n)
upper.to.lower.tri.inds(n)
n |
integer larger than 1. |
integer vector containing a permutation of 1:N
where N = n*(n-1)/2.
upper.tri
, lower.tri
with a related
purpose.
m5 <- matrix(NA,5,5)
m <- m5; m[lower.tri(m)] <- upper.to.lower.tri.inds(5); m
m <- m5; m[upper.tri(m)] <- lower.to.upper.tri.inds(5); m

stopifnot(lower.to.upper.tri.inds(2) == 1,
          lower.to.upper.tri.inds(3) == 1:3,
          upper.to.lower.tri.inds(3) == 1:3,
          sort(upper.to.lower.tri.inds(5)) == 1:10,
          sort(lower.to.upper.tri.inds(6)) == 1:15)
pam
-consistent Medoids from ClusteringGiven a data matrix or dissimilarity x
for, say, n
observational units and a clustering,
compute the
pam()
-consistent medoids.
medoids(x, clustering, diss = inherits(x, "dist"), USE.NAMES = FALSE, ...)
x |
Either a data matrix or data frame, or dissimilarity matrix or
object, see also |
clustering |
an integer vector of length |
diss |
see also |
USE.NAMES |
a logical, typically false, passed to the
|
... |
optional further argument passed to |
a numeric vector of length k, the number of clusters.
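For illustration, the underlying idea is that, within each cluster, the medoid is the member with the smallest total dissimilarity to the other members. A sketch only (medoids() itself handles the diss and further arguments; the names d, cl, my.med are made up here):

## Sketch of the "pam()-consistent medoid" idea for a given clustering
data(agriculture)
d  <- as.matrix(daisy(agriculture))
cl <- cutree(agnes(agriculture), k = 3)         # some clustering of the rows
my.med <- vapply(sort(unique(cl)), function(k) {
    idx <- which(cl == k)
    idx[which.min(rowSums(d[idx, idx, drop = FALSE]))]
}, integer(1))
my.med                                          # compare with medoids(agriculture, cl)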
Martin Maechler, after being asked how pam()
could be used
instead of kmeans()
, starting from a previous clustering.
pam
, kmeans
.
Further, cutree()
and agnes
(or hclust
).
## From example(agnes):
data(votes.repub)
agn1 <- agnes(votes.repub, metric = "manhattan", stand = TRUE)
agn2 <- agnes(daisy(votes.repub), diss = TRUE, method = "complete")
agnS <- agnes(votes.repub, method = "flexible", par.meth = 0.625)

for(k in 2:11) {
  print(table(cl.k <- cutree(agnS, k=k)))
  stopifnot(length(cl.k) == nrow(votes.repub), 1 <= cl.k, cl.k <= k, table(cl.k) >= 2)
  m.k <- medoids(votes.repub, cl.k)
  cat("k =", k,"; sort(medoids) = "); dput(sort(m.k), control={})
}
Returns a list representing a divisive hierarchical clustering of a dataset with binary variables only.
mona(x, trace.lev = 0)
x |
data matrix or data frame in which each row corresponds to an
observation, and each column corresponds to a variable. All
variables must be binary. A limited number of missing values ( |
trace.lev |
logical or integer indicating if (and how much) the algorithm should produce progress output. |
mona
is fully described in chapter 7 of Kaufman and Rousseeuw (1990).
It is “monothetic” in the sense that each division is based on a
single (well-chosen) variable, whereas most other hierarchical methods
(including agnes
and diana
) are “polythetic”, i.e. they use
all variables together.
The mona
-algorithm constructs a hierarchy of clusterings,
starting with one large cluster. Clusters are divided until all
observations in the same cluster have identical values for all variables.
At each stage, all clusters are divided according to the values of one
variable. A cluster is divided into one cluster with all observations having
value 1 for that variable, and another cluster with all observations having
value 0 for that variable.
The variable used for splitting a cluster is the variable with the maximal total association to the other variables, according to the observations in the cluster to be split. The association between variables f and g is given by a(f,g)*d(f,g) - b(f,g)*c(f,g), where a(f,g), b(f,g), c(f,g), and d(f,g) are the numbers in the contingency table of f and g. [That is, a(f,g) (resp. d(f,g)) is the number of observations for which f and g both have value 0 (resp. value 1); b(f,g) (resp. c(f,g)) is the number of observations for which f has value 0 (resp. 1) and g has value 1 (resp. 0).] The total association of a variable f is the sum of its associations to all variables.
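A small sketch of this association measure (illustrative only; the helper name assoc is made up and this is not mona()'s internal C code):

## a(f,g)*d(f,g) - b(f,g)*c(f,g) from the 2 x 2 contingency table of two binary variables
assoc <- function(f, g) {
    tab <- table(factor(f, levels = 0:1), factor(g, levels = 0:1))  # NAs are dropped here
    n00 <- tab["0","0"]; n01 <- tab["0","1"]
    n10 <- tab["1","0"]; n11 <- tab["1","1"]
    n00 * n11 - n01 * n10
}
data(animals)
A <- animals - 1                     # recode the {1,2} coding of 'animals' to {0,1}
assoc(A[, "war"], A[, "fly"])        # association between two of the binary variables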
an object of class "mona"
representing the clustering.
See mona.object
for details.
Missing Values (NAs)
The mona-algorithm requires “pure” 0-1 values. However,
mona(x)
allows x
to contain (not too many)
NA
s. In a preliminary step, these are “imputed”,
i.e., all missing values are filled in. To do this, the same measure
of association between variables is used as in the algorithm. When variable
f has missing values, the variable g with the largest absolute association
to f is looked up. When the association between f and g is positive,
any missing value of f is replaced by the value of g for the same
observation. If the association between f and g is negative, then any missing
value of f is replaced by the value of 1-g for the same
observation.
In cluster versions before 2.0.6, the algorithm entered an
infinite loop in the boundary case of one variable, i.e.,
ncol(x) == 1
, which currently signals an error (because the
algorithm, now in C, has not correctly taken account of this special case).
agnes
for background and references;
mona.object
, plot.mona
.
data(animals)
ma <- mona(animals)
ma
## Plot similar to Figure 10 in Struyf et al (1996)
plot(ma)

## One place to see if/how error messages are *translated* (to 'de' / 'pl'):
ani.NA   <- animals; ani.NA[4,] <- NA
aniNA    <- within(animals, { end[2:9] <- NA })
aniN2    <- animals; aniN2[cbind(1:6, c(3, 1, 4:6, 2))] <- NA
ani.non2 <- within(animals, end[7] <- 3 )
ani.idNA <- within(animals, end[!is.na(end)] <- 1 )
try( mona(ani.NA)   ) ## error: .. object with all values missing
try( mona(aniNA)    ) ## error: .. more than half missing values
try( mona(aniN2)    ) ## error: all have at least one missing
try( mona(ani.non2) ) ## error: all must be binary
try( mona(ani.idNA) ) ## error: ditto
The objects of class "mona"
represent the divisive
hierarchical clustering of a dataset with only binary variables
(measurements). This class of objects is returned from
mona
.
A legitimate mona
object is a list with the following components:
data |
matrix with the same dimensions as the original data matrix, but with factors coded as 0 and 1, and all missing values replaced. |
order |
a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
order.lab |
a vector similar to |
variable |
vector of length n-1 where n is the number of observations,
specifying the variables used to separate the observations of |
step |
vector of length n-1 where n is the number of observations,
specifying the separation steps at which the observations of
|
The "mona"
class has methods for the following generic functions:
print
, summary
, plot
.
mona
for examples etc, plot.mona
.
Partitioning (clustering) of the data into k
clusters “around
medoids”, a more robust version of K-means.
pam(x, k, diss = inherits(x, "dist"), metric = c("euclidean", "manhattan"), medoids = if(is.numeric(nstart)) "random", nstart = if(variant == "faster") 1 else NA, stand = FALSE, cluster.only = FALSE, do.swap = TRUE, keep.diss = !diss && !cluster.only && n < 100, keep.data = !diss && !cluster.only, variant = c("original", "o_1", "o_2", "f_3", "f_4", "f_5", "faster"), pamonce = FALSE, trace.lev = 0)
x |
data matrix or data frame, or dissimilarity matrix or object,
depending on the value of the In case of a matrix or data frame, each row corresponds to an
observation, and each column corresponds to a variable. All
variables must be numeric (or logical). Missing values ( In case of a dissimilarity matrix, |
k |
positive integer specifying the number of clusters, less than the number of observations. |
diss |
logical flag: if TRUE (default for |
metric |
character string specifying the metric to be used for calculating
dissimilarities between observations. |
medoids |
NULL (default) or length- |
nstart |
used only when |
stand |
logical; if true, the measurements in |
cluster.only |
logical; if true, only the clustering will be computed and returned, see details. |
do.swap |
logical indicating if the swap phase should
happen. The default, |
keep.diss , keep.data
|
logicals indicating if the dissimilarities
and/or input data |
pamonce |
logical or integer in |
variant |
a |
trace.lev |
integer specifying a trace level for printing
diagnostics during the build and swap phase of the algorithm.
Default |
The basic pam
algorithm is fully described in chapter 2 of
Kaufman and Rousseeuw(1990). Compared to the k-means approach in kmeans
, the
function pam
has the following features: (a) it also accepts a
dissimilarity matrix; (b) it is more robust because it minimizes a sum
of dissimilarities instead of a sum of squared euclidean distances;
(c) it provides a novel graphical display, the silhouette plot (see
plot.partition
); (d) it allows selecting the number of clusters
using mean(silhouette(pr)[, "sil_width"])
on the result
pr <- pam(..)
, or directly its component
pr$silinfo$avg.width
, see also pam.object
.
When cluster.only
is true, the result is simply a (possibly
named) integer vector specifying the clustering, i.e.,pam(x,k, cluster.only=TRUE)
is the same as pam(x,k)$clustering
but computed more efficiently.
The pam
-algorithm is based on the search for k
representative objects or medoids among the observations of the
dataset. These observations should represent the structure of the
data. After finding a set of k
medoids, k
clusters are
constructed by assigning each observation to the nearest medoid. The
goal is to find k
representative objects which minimize the sum
of the dissimilarities of the observations to their closest
representative object.
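A minimal sketch of this criterion (not pam()'s internal code; note that pam() reports the criterion per observation, i.e., averaged over n, in its objective component):

## Sum of dissimilarities of each observation to its closest medoid
data(ruspini)
pr <- pam(ruspini, k = 4)
d  <- as.matrix(daisy(ruspini))
crit <- sum(d[cbind(seq_len(nrow(d)), pr$id.med[pr$clustering])])
crit / nrow(d)    # compare with the final ("swap") entry of pr$objective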
By default, when medoids
are not specified, the algorithm first
looks for a good initial set of medoids (this is called the
build phase). Then it finds a local minimum for the
objective function, that is, a solution such that there is no single
switch of an observation with a medoid (i.e. a ‘swap’) that will
decrease the objective (this is called the swap phase).
When the medoids
are specified (or randomly generated), their order does not
matter; in general, the algorithms have been designed to not depend on
the order of the observations.
The pamonce
option, new in cluster 1.14.2 (Jan. 2012), has been
proposed by Matthias Studer, University of Geneva, based on the
findings by Reynolds et al. (2006) and was extended by Erich Schubert,
TU Dortmund, with the FastPAM optimizations.
The default FALSE
(or integer 0
) corresponds to the
original “swap” algorithm, whereas pamonce = 1
(or
TRUE
), corresponds to the first proposal ....
and pamonce = 2
additionally implements the second proposal as
well.
The key ideas of ‘FastPAM’ (Schubert and Rousseeuw, 2019) are implemented except for the linear approximate build as follows:
pamonce = 3
:reduces the runtime by a factor of O(k) by exploiting that points cannot be closest to all current medoids at the same time.
pamonce = 4
:additionally allows executing multiple swaps per iteration, usually reducing the number of iterations.
pamonce = 5
: adds minor optimizations copied from the
pamonce = 2
approach, and is expected to be the fastest of the
‘FastPam’ variants included.
‘FasterPAM’ (Schubert and Rousseeuw, 2021) is implemented via
pamonce = 6
:execute each swap which improves results
immediately, and hence typically multiple swaps per iteration;
this swapping algorithm runs in O(n^2) time per iteration,
rather than a factor of O(k) more, which is much faster for all but small
k.
In addition, ‘FasterPAM’ uses random initialization of the
medoids (instead of the ‘build’ phase) to avoid the
initialization cost of the build algorithm. In particular
for large k, this yields a much faster algorithm, while preserving a
similar result quality.
One may decide to use repeated random initialization by setting
nstart > 1
.
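For example (a sketch using only the arguments documented above), 'FasterPAM' with a few random restarts could be called as:

data(ruspini)
set.seed(1)   # random initialization, so fix the seed for reproducibility
pf <- pam(ruspini, k = 4, variant = "faster", nstart = 5)
pf$medoids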
an object of class "pam"
representing the clustering. See
?pam.object
for details.
For large datasets, pam
may need too much memory or too much
computation time since both are O(n^2). Then,
clara()
is preferable, see its documentation.
There is a hard limit currently, n = 65536, at 2^16,
because for larger n,
n(n-1)/2 is larger than
the maximal integer (
.Machine$integer.max
= 2^31 - 1).
Kaufman and Rousseeuw's original Fortran code was translated to C
and augmented in several ways, e.g. to allow cluster.only=TRUE
or do.swap=FALSE
, by Martin Maechler.
Matthias Studer, Univ.Geneva provided the pamonce
(1
and 2
)
implementation.
Erich Schubert, TU Dortmund contributed the pamonce
(3
to 6
)
implementation.
Reynolds, A., Richards, G., de la Iglesia, B. and Rayward-Smith, V. (2006) Clustering rules: A comparison of partitioning and hierarchical clustering algorithms; Journal of Mathematical Modelling and Algorithms 5, 475–504. doi:10.1007/s10852-005-9022-1.
Erich Schubert and Peter J. Rousseeuw (2019) Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms; SISAP 2019, 171–187. doi:10.1007/978-3-030-32047-8_16.
Erich Schubert and Peter J. Rousseeuw (2021) Fast and Eager k-Medoids Clustering: O(k) Runtime Improvement of the PAM, CLARA, and CLARANS Algorithms; Preprint, to appear in Information Systems (https://arxiv.org/abs/2008.05171).
agnes
for background and references;
pam.object
, clara
, daisy
,
partition.object
, plot.partition
,
dist
.
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
           cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
pamx <- pam(x, 2)
pamx # Medoids: '7' and '25' ...
summary(pamx)
plot(pamx)
## use obs. 1 & 16 as starting medoids -- same result (typically)
(p2m <- pam(x, 2, medoids = c(1,16)))
## no _build_ *and* no _swap_ phase: just cluster all obs. around (1, 16):
p2.s <- pam(x, 2, medoids = c(1,16), do.swap = FALSE)
p2.s

p3m <- pam(x, 3, trace = 2)
## rather stupid initial medoids:
(p3m. <- pam(x, 3, medoids = 3:1, trace = 1))

pam(daisy(x, metric = "manhattan"), 2, diss = TRUE)

data(ruspini)
## Plot similar to Figure 4 in Struyf et al (1996)
## Not run: plot(pam(ruspini, 4), ask = TRUE)
The objects of class "pam"
represent a partitioning of a
dataset into clusters.
A legitimate pam
object is a list
with the following components:
medoids |
the medoids or representative objects of the
clusters. If a dissimilarity matrix was given as input to
|
id.med |
integer vector of indices giving the medoid observation numbers. |
clustering |
the clustering vector, see |
objective |
the objective function after the first and second
step of the |
isolation |
vector with length equal to the number of clusters, specifying which
clusters are isolated clusters (L- or L*-clusters) and which clusters are
not isolated. |
clusinfo |
matrix, each row gives numerical information for one cluster. These are the cardinality of the cluster (number of observations), the maximal and average dissimilarity between the observations in the cluster and the cluster's medoid, the diameter of the cluster (maximal dissimilarity between two observations of the cluster), and the separation of the cluster (minimal dissimilarity between an observation of the cluster and an observation of another cluster). |
silinfo |
list with silhouette width information, see
|
diss |
dissimilarity (maybe NULL), see |
call |
generating call, see |
data |
(possibly standardized) see |
These objects are returned from pam
.
The "pam"
class has methods for the following generic functions:
print
, summary
.
The class "pam"
inherits from "partition"
.
Therefore, the generic functions plot
and clusplot
can
be used on a pam
object.
pam
, dissimilarity.object
,
partition.object
, plot.partition
.
## Use the silhouette widths for assessing the best number of clusters,
## following a one-dimensional example from Christian Hennig :
##
x <- c(rnorm(50), rnorm(50,mean=5), rnorm(30,mean=15))
asw <- numeric(20)
## Note that "k=1" won't work!
for (k in 2:20)
  asw[k] <- pam(x, k) $ silinfo $ avg.width
k.best <- which.max(asw)
cat("silhouette-optimal number of clusters:", k.best, "\n")
plot(1:20, asw, type= "h", main = "pam() clustering assessment",
     xlab= "k (# clusters)", ylab = "average silhouette width")
axis(1, k.best, paste("best",k.best,sep="\n"), col = "red", col.axis = "red")
The objects of class "partition"
represent a partitioning of a
dataset into clusters.
a "partition"
object is a list with the following
(and typically more) components:
clustering |
the clustering vector. An integer vector of length |
call |
the matched |
silinfo |
a list with all silhouette information, only available when
the number of clusters is non-trivial, i.e.,
This information is also needed to construct a silhouette plot of
the clustering, see Note that |
objective |
value of criterion maximized during the partitioning algorithm; may have more than one entry for different stages. |
diss |
an object of class |
data |
a matrix containing the original or standardized data. This might be missing to save memory or when a dissimilarity matrix was given as input structure to the clustering method. |
These objects are returned from pam
, clara
or fanny
.
The "partition"
class has a method for the following generic functions:
plot
, clusplot
.
The following classes inherit from class "partition"
:
"pam"
, "clara"
and "fanny"
.
See pam.object
, clara.object
and
fanny.object
for details.
This dataset constitutes a description of 136 plant species according to biological attributes (morphological or reproductive).
data(plantTraits)
A data frame with 136 observations on the following 31 variables.
pdias
Diaspore mass (mg)
longindex
Seed bank longevity
durflow
Flowering duration
height
Plant height, an ordered factor with levels
1
< 2
< ... < 8
.
begflow
Time of first flowering, an ordered factor with levels 1
< 2
< 3
< 4
< 5
< 6
< 7
< 8
< 9
mycor
Mycorrhizas, an ordered factor with levels 0
never < 1
sometimes< 2
always
vegaer
aerial vegetative propagation, an ordered
factor with levels 0
never < 1
present but limited< 2
important.
vegsout
underground vegetative propagation, an ordered
factor with 3 levels identical to vegaer
above.
autopoll
selfing pollination, an ordered factor with
levels 0
never < 1
rare < 2
often < 3
the rule.
insects
insect pollination, an ordered factor with 5 levels 0
< ... < 4
.
wind
wind pollination, an ordered factor with 5 levels 0
< ... < 4
.
lign
a binary factor with levels 0:1
,
indicating if plant is woody.
piq
a binary factor indicating if plant is thorny.
ros
a binary factor indicating if plant is rosette.
semiros
semi-rosette plant, a binary factor (0
:
no; 1
: yes).
leafy
leafy plant, a binary factor.
suman
summer annual, a binary factor.
winan
winter annual, a binary factor.
monocarp
monocarpic perennial, a binary factor.
polycarp
polycarpic perennial, a binary factor.
seasaes
seasonal aestival leaves, a binary factor.
seashiv
seasonal hibernal leaves, a binary factor.
seasver
seasonal vernal leaves, a binary factor.
everalw
leaves always evergreen, a binary factor.
everparti
leaves partially evergreen, a binary factor.
elaio
fruits with an elaiosome (dispersed by ants), a binary factor.
endozoo
endozoochorous fruits, a binary factor.
epizoo
epizoochorous fruits, a binary factor.
aquat
aquatic dispersal fruits, a binary factor.
windgl
wind dispersed fruits, a binary factor.
unsp
unspecialized mechanism of seed dispersal, a binary factor.
Most of the factor attributes are not disjunctive. For example, a plant can usually be pollinated by insects, but sometimes self-pollination can occur.
Vallet, Jeanne (2005) Structuration de communautés végétales et analyse comparative de traits biologiques le long d'un gradient d'urbanisation. Mémoire de Master 2 'Ecologie-Biodiversité-Evolution'; Université Paris Sud XI, 30p. + annexes (in French)
data(plantTraits)

## Calculation of a dissimilarity matrix
library(cluster)
dai.b <- daisy(plantTraits,
               type = list(ordratio = 4:11, symm = 12:13, asymm = 14:31))

## Hierarchical classification
agn.trts <- agnes(dai.b, method="ward")
plot(agn.trts, which.plots = 2, cex= 0.6)
plot(agn.trts, which.plots = 1)
cutree6 <- cutree(agn.trts, k=6)
cutree6

## Principal Coordinate Analysis
cmdsdai.b <- cmdscale(dai.b, k=6)
plot(cmdsdai.b[, 1:2], asp = 1, col = cutree6)
Creates plots for visualizing an agnes
object.
## S3 method for class 'agnes'
plot(x, ask = FALSE, which.plots = NULL, main = NULL,
     sub = paste("Agglomerative Coefficient = ",round(x$ac, digits = 2)),
     adj = 0, nmax.lab = 35, max.strlen = 5, xax.pretty = TRUE, ...)
x |
an object of class |
ask |
logical; if true and |
which.plots |
integer vector or NULL (default), the latter
producing both plots. Otherwise, |
main , sub
|
main and sub title for the plot, with convenient
defaults. See documentation for these arguments in |
adj |
for label adjustment in |
nmax.lab |
integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
max.strlen |
positive integer giving the length to which strings are truncated in banner plot labeling. |
xax.pretty |
logical or integer indicating if
|
... |
graphical parameters (see |
When ask = TRUE
, rather than producing each plot sequentially,
plot.agnes
displays a menu listing all the plots that can be produced.
If the menu is not desired but a pause between plots is still wanted
one must set par(ask= TRUE)
before invoking the plot command.
The banner displays the hierarchy of clusters, and is equivalent to a tree.
See Rousseeuw (1986) or chapter 5 of Kaufman and Rousseeuw (1990).
The banner plots distances at which observations and clusters are merged.
The observations are listed in the order found by the agnes
algorithm,
and the numbers in the height
vector are represented as bars
between the observations.
The leaves of the clustering tree are the original observations. Two branches come together at the distance between the two clusters being merged.
For more customization of the plots, rather call
bannerplot
and pltree()
, i.e., its method
pltree.twins
, respectively,
directly with
corresponding arguments, e.g., xlab
or ylab
.
Appropriate plots are produced on the current graphics device. This can
be one or both of the following choices:
Banner
Clustering tree
In the banner plot, observation labels are only printed when the
number of observations is less than nmax.lab
(35, by
default), for readability. Moreover, observation labels are truncated
to maximally max.strlen
(5) characters.
For the dendrogram, more flexibility than via pltree()
is
provided by dg <- as.dendrogram(x)
and
plotting dg
via plot.dendrogram
.
Kaufman, L. and Rousseeuw, P.J. (1990) Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
Rousseeuw, P.J. (1986). A visual display for hierarchical classification, in Data Analysis and Informatics 4; edited by E. Diday, Y. Escoufier, L. Lebart, J. Pages, Y. Schektman, and R. Tomassone. North-Holland, Amsterdam, 743–748.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997) Integrating Robust Clustering Techniques in S-PLUS, Computational Statistics and Data Analysis, 26, 17–37.
agnes
and agnes.object
;
bannerplot
, pltree.twins
,
and par
.
## Can also pass 'labels' to pltree() and bannerplot():
data(iris)
cS <- as.character(Sp <- iris$Species)
cS[Sp == "setosa"] <- "S"
cS[Sp == "versicolor"] <- "V"
cS[Sp == "virginica"] <- "g"
ai <- agnes(iris[, 1:4])
plot(ai, labels = cS, nmax = 150) # bannerplot labels are a mess
Creates plots for visualizing a diana
object.
## S3 method for class 'diana'
plot(x, ask = FALSE, which.plots = NULL, main = NULL,
     sub = paste("Divisive Coefficient = ", round(x$dc, digits = 2)),
     adj = 0, nmax.lab = 35, max.strlen = 5, xax.pretty = TRUE, ...)
x |
an object of class |
ask |
logical; if true and |
which.plots |
integer vector or NULL (default), the latter
producing both plots. Otherwise, |
main , sub
|
main and sub title for the plot, each with a convenient
default. See documentation for these arguments in
|
adj |
for label adjustment in |
nmax.lab |
integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
max.strlen |
positive integer giving the length to which strings are truncated in banner plot labeling. |
xax.pretty |
logical or integer indicating if
|
... |
graphical parameters (see |
When ask = TRUE
, rather than producing each plot sequentially,
plot.diana
displays a menu listing all the plots that can be produced.
If the menu is not desired but a pause between plots is still wanted
one must set par(ask= TRUE)
before invoking the plot command.
The banner displays the hierarchy of clusters, and is equivalent to a tree.
See Rousseeuw (1986) or chapter 6 of Kaufman and Rousseeuw (1990).
The banner plots the diameter of each cluster being split.
The observations are listed in the order found by the diana
algorithm, and the numbers in the height
vector are represented
as bars between the observations.
The leaves of the clustering tree are the original observations. A branch splits up at the diameter of the cluster being split.
An appropriate plot is produced on the current graphics device. This can
be one or both of the following choices:
Banner
Clustering tree
In the banner plot,
observation labels are only printed when the number of observations is
less than nmax.lab
(35, by default), for readability.
Moreover, observation labels are truncated to maximally
max.strlen
(5) characters.
see those in plot.agnes
.
diana
, diana.object
,
twins.object
, par
.
example(diana)# -> dv <- diana(....)

plot(dv, which = 1, nmax.lab = 100)

## wider labels :
op <- par(mar = par("mar") + c(0, 2, 0,0))
plot(dv, which = 1, nmax.lab = 100, max.strlen = 12)
par(op)
Creates the banner of a mona
object.
## S3 method for class 'mona'
plot(x, main = paste("Banner of ", deparse1(x$call)),
     sub = NULL, xlab = "Separation step",
     col = c(2,0), axes = TRUE, adj = 0,
     nmax.lab = 35, max.strlen = 5, ...)
x |
an object of class |
main , sub
|
main and sub titles for the plot, with convenient
defaults. See documentation in |
xlab |
x axis label, see |
col , adj
|
graphical parameters passed to |
axes |
logical, indicating if (labeled) axes should be drawn. |
nmax.lab |
integer indicating the number of labels which is considered too large for labeling. |
max.strlen |
positive integer giving the length to which strings are truncated in labeling. |
... |
further graphical arguments are passed to
|
Plots the separation step at which clusters are split. The
observations are given in the order found by the mona
algorithm, the numbers in the step
vector are represented as
bars between the observations.
When a long bar is drawn between two observations, those observations have the same value for each variable. See chapter 7 of Kaufman and Rousseeuw (1990).
A banner is plotted on the current graphics device.
In the banner plot,
observation labels are only printed when the number of observations is
less than nmax.lab
(35, by default), for readability.
Moreover, observation labels are truncated to maximally
max.strlen
(5) characters.
see those in plot.agnes
.
mona
, mona.object
, par
.
Creates plots for visualizing a partition
object.
## S3 method for class 'partition'
plot(x, ask = FALSE, which.plots = NULL,
     nmax.lab = 40, max.strlen = 5, data = x$data, dist = NULL,
     stand = FALSE, lines = 2,
     shade = FALSE, color = FALSE, labels = 0, plotchar = TRUE,
     span = TRUE, xlim = NULL, ylim = NULL, main = NULL, ...)
x |
an object of class |
ask |
logical; if true and |
which.plots |
integer vector or NULL (default), the latter
producing both plots. Otherwise, |
nmax.lab |
integer indicating the number of labels which is considered too large for single-name labeling the silhouette plot. |
max.strlen |
positive integer giving the length to which strings are truncated in silhouette plot labeling. |
data |
numeric matrix with the scaled data; per default taken
from the partition object |
dist |
when |
stand , lines , shade , color , labels , plotchar , span , xlim , ylim , main , ...
|
All optional arguments available for the |
When ask= TRUE
, rather than producing each plot sequentially,
plot.partition
displays a menu listing all the plots that can
be produced.
If the menu is not desired but a pause between plots is still wanted,
call par(ask= TRUE)
before invoking the plot command.
The clusplot of a cluster partition consists of a two-dimensional
representation of the observations, in which the clusters are
indicated by ellipses (see clusplot.partition
for more
details).
The silhouette plot of a nonhierarchical clustering is fully
described in Rousseeuw (1987) and in chapter 2 of Kaufman and
Rousseeuw (1990).
For each observation i, a bar is drawn, representing its silhouette
width s(i), see silhouette
for details.
Observations are grouped per cluster, starting with cluster 1 at the
top. Observations with a large s(i) (almost 1) are very well
clustered, a small s(i) (around 0) means that the observation lies
between two clusters, and observations with a negative s(i) are
probably placed in the wrong cluster.
A clustering can be performed for several values of k
(the number of
clusters). Finally, choose the value of k
with the largest overall
average silhouette width.
An appropriate plot is produced on the current graphics device. This
can be one or both of the following choices:
Clusplot
Silhouette plot
In the silhouette plot, observation labels are only printed when the
number of observations is less than nmax.lab
(40, by default),
for readability. Moreover, observation labels are truncated to
maximally max.strlen
(5) characters.
For more flexibility, use plot(silhouette(x), ...)
, see
plot.silhouette
.
Rousseeuw, P.J. (1987) Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math., 20, 53–65.
Further, the references in plot.agnes
.
partition.object
, clusplot.partition
,
clusplot.default
, pam
,
pam.object
, clara
,
clara.object
, fanny
,
fanny.object
, par
.
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
           cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
plot(pam(x, 2))

## Save space not keeping data in clus.object, and still clusplot() it:
data(xclara)
cx <- clara(xclara, 3, keep.data = FALSE)
cx$data # is NULL
plot(cx, data = xclara)
pltree()
Draws a clustering tree (“dendrogram”) on the
current graphics device. We provide the twins
method, which draws the
tree of a twins
object, i.e., hierarchical clustering,
typically resulting from agnes()
or diana()
.
pltree(x, ...)

## S3 method for class 'twins'
pltree(x, main = paste("Dendrogram of ", deparse1(x$call)),
       labels = NULL, ylab = "Height", ...)
x |
in general, an R object for which a |
main |
main title with a sensible default. |
labels |
labels to use; the default is constructed from |
ylab |
label for y-axis. |
... |
graphical parameters (see |
Creates a plot of a clustering tree given a twins
object. The
leaves of the tree are the original observations. In case of an
agglomerative clustering, two branches come together at the distance
between the two clusters being merged. For a divisive clustering, a
branch splits up at the diameter of the cluster being split.
Note that currently the method function simply calls
plot(as.hclust(x), ...)
, which dispatches to
plot.hclust(..)
. If more flexible plots are needed,
consider xx <- as.dendrogram(as.hclust(x))
and plotting
xx
, see plot.dendrogram
.
a NULL value is returned.
agnes
, agnes.object
, diana
,
diana.object
, hclust
, par
,
plot.agnes
, plot.diana
.
data(votes.repub)
agn <- agnes(votes.repub)
pltree(agn)

dagn  <- as.dendrogram(as.hclust(agn))
dagn2 <- as.dendrogram(as.hclust(agn), hang = 0.2)
op <- par(mar = par("mar") + c(0,0,0, 2)) # more space to the right
plot(dagn2, horiz = TRUE)
plot(dagn, horiz = TRUE, center = TRUE,
     nodePar = list(lab.cex = 0.6, lab.col = "forest green", pch = NA),
     main = deparse(agn$call))
par(op)
The pluton
data frame has 45 rows and 4 columns,
containing percentages of isotopic composition of 45 Plutonium
batches.
data(pluton)
This data frame contains the following columns:
Pu238
the percentages of Pu-238,
always less than 2 percent.
Pu239
the percentages of Pu-239,
typically between 60 and 80 percent (from neutron capture of Uranium,
U-238).
Pu240
percentage of the plutonium 240 isotope.
Pu241
percentage of the plutonium 241 isotope.
Note that the percentage of plutonium 242 can be computed from the other four percentages, see the examples.
In the reference below it is explained why it is very desirable to combine these plutonium batches into three groups of similar size.
Available as ‘pluton.dat’ from the archive of the University of Antwerpen, ‘..../datasets/clusplot-examples.tar.gz’, no longer available.
Rousseeuw, P.J. and Kaufman, L and Trauwaert, E. (1996) Fuzzy clustering using scatter matrices, Computational Statistics and Data Analysis 23(1), 135–151.
data(pluton)

hist(apply(pluton,1,sum), col = "gray") # between 94% and 100%
pu5 <- pluton
pu5$Pu242 <- 100 - apply(pluton,1,sum) # the remaining isotope.
pairs(pu5)
Compute points on the ellipsoid boundary, mostly for drawing.
predict.ellipsoid(object, n.out=201, ...) ## S3 method for class 'ellipsoid' predict(object, n.out=201, ...) ellipsoidPoints(A, d2, loc, n.half = 201)
object |
an object of class |
n.out , n.half
|
half the number of points to create. |
A , d2 , loc
|
arguments of the auxiliary |
... |
passed to and from methods. |
Note that ellipsoidPoints
is the workhorse function of
predict.ellipsoid
, a standalone function and method for
ellipsoid
objects; see ellipsoidhull
.
The class of object
is not checked; it must solely have valid
components loc
(of length p), a p x p matrix
cov
(corresponding to A
), and d2
for the
center, the shape (“covariance”) matrix, and the squared average
radius (or distance) or qchisq(*, p)
quantile, respectively.
Unfortunately, this is only implemented for p = 2, currently;
contributions for p >= 3 are very welcome.
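As a hedged sketch of the requirement above, a hand-built object with only the components loc, cov and d2 (names as described in the details; the values are arbitrary) suffices for predict():

el <- structure(list(loc = c(0, 0),
                     cov = rbind(c(3, 1), c(1, 2)),
                     d2  = qchisq(0.95, df = 2)),
                class = "ellipsoid")
xy <- predict(el, n.out = 101)   # a (2 * 101) x 2 matrix of boundary points
plot(xy, type = "l", asp = 1)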
a numeric matrix of dimension 2*n.out
times p.
ellipsoidhull
, volume.ellipsoid
.
## see also example(ellipsoidhull) ## Robust vs. L.S. covariance matrix set.seed(143) x <- rt(200, df=3) y <- 3*x + rt(200, df=2) plot(x,y, main="non-normal data (N=200)") mtext("with classical and robust cov.matrix ellipsoids") X <- cbind(x,y) C.ls <- cov(X) ; m.ls <- colMeans(X) d2.99 <- qchisq(0.99, df = 2) lines(ellipsoidPoints(C.ls, d2.99, loc=m.ls), col="green") if(require(MASS)) { Cxy <- cov.rob(cbind(x,y)) lines(ellipsoidPoints(Cxy$cov, d2 = d2.99, loc=Cxy$center), col="red") }# MASS
Prints the call, agglomerative coefficient, ordering of objects and
distances between merging clusters ('Height') of an agnes
object.
This is a method for the generic print()
function for objects
inheriting from class agnes
, see agnes.object
.
## S3 method for class 'agnes' print(x, ...)
x |
an agnes object. |
... |
potential further arguments (required by generic). |
summary.agnes
producing more output;
agnes
, agnes.object
, print
,
print.default
.
Prints the best sample, medoids, clustering vector and objective function
of a clara
object.
This is a method for the function print()
for objects
inheriting from class clara
.
## S3 method for class 'clara' print(x, ...)
x |
a clara object. |
... |
potential further arguments (required by generic). |
summary.clara
producing more output;
clara
, clara.object
, print
,
print.default
.
Prints the ordering of objects, diameters of split clusters,
and divisive coefficient of a diana
object.
This is a method for the function print()
for objects
inheriting from class diana
.
## S3 method for class 'diana' print(x, ...)
x |
a diana object. |
... |
potential further arguments (required by generic). |
diana
, diana.object
, print
,
print.default
.
Print or summarize the distances and the attributes of a
dissimilarity
object.
These are methods for the functions print()
and summary()
for
dissimilarity
objects. See print
, print.default
,
or summary
for the general behavior of these.
## S3 method for class 'dissimilarity' print(x, diag = NULL, upper = NULL, digits = getOption("digits"), justify = "none", right = TRUE, ...) ## S3 method for class 'dissimilarity' summary(object, digits = max(3, getOption("digits") - 2), ...) ## S3 method for class 'summary.dissimilarity' print(x, ...)
x , object
|
a |
digits |
the number of digits to use, see |
diag , upper , justify , right
|
optional arguments specifying how
the triangular dissimilarity matrix is printed; see
|
... |
potential further arguments (required by generic). |
daisy
, dissimilarity.object
,
print
, print.default
, print.dist
.
## See example(daisy) sd <- summary(daisy(matrix(rnorm(100), 20,5))) sd # -> print.summary.dissimilarity(.) str(sd)
Prints the objective function, membership coefficients and clustering vector
of a fanny
object.
This is a method for the function print()
for objects
inheriting from class fanny
.
## S3 method for class 'fanny' print(x, digits = getOption("digits"), ...) ## S3 method for class 'fanny' summary(object, ...) ## S3 method for class 'summary.fanny' print(x, digits = getOption("digits"), ...)
x , object
|
a |
digits |
number of significant digits for printing, see
|
... |
potential further arguments (required by generic). |
fanny
, fanny.object
, print
,
print.default
.
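A minimal hedged sketch of when these methods are invoked (the data set and k = 3 are chosen for illustration only):

data(ruspini)
fr <- fanny(ruspini, 3)
fr            # uses print.fanny()
summary(fr)   # summary.fanny(), printed via print.summary.fanny()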
Prints the ordering of objects, separation steps, and used variables
of a mona
object.
This is a method for the function print()
for objects
inheriting from class mona
.
## S3 method for class 'mona' print(x, ...)
x |
a mona object. |
... |
potential further arguments (required by generic). |
mona
, mona.object
, print
,
print.default
.
Prints the medoids, clustering vector and objective function
of a pam
object.
This is a method for the function print()
for objects
inheriting from class pam
.
## S3 method for class 'pam' print(x, ...)
x |
a pam object. |
... |
potential further arguments (required by generic). |
pam
, pam.object
, print
,
print.default
.
The Ruspini data set, consisting of 75 points in four groups, is popular for illustrating clustering techniques.
data(ruspini)
A data frame with 75 observations on 2 variables giving the x and y coordinates of the points, respectively.
E. H. Ruspini (1970) Numerical methods for fuzzy clustering. Inform. Sci. 2, 319–350.
see those in agnes
.
data(ruspini) ## Plot similar to Figure 4 in Struyf et al (1996) ## Not run: plot(pam(ruspini, 4), ask = TRUE) ## Plot similar to Figure 6 in Struyf et al (1996) plot(fanny(ruspini, 5))
Compute silhouette information according to a given clustering in
clusters.
silhouette(x, ...) ## Default S3 method: silhouette(x, dist, dmatrix, ...) ## S3 method for class 'partition' silhouette(x, ...) ## S3 method for class 'clara' silhouette(x, full = FALSE, subset = NULL, ...) sortSilhouette(object, ...) ## S3 method for class 'silhouette' summary(object, FUN = mean, ...) ## S3 method for class 'silhouette' plot(x, nmax.lab = 40, max.strlen = 5, main = NULL, sub = NULL, xlab = expression("Silhouette width "* s[i]), col = "gray", do.col.sort = length(col) > 1, border = 0, cex.names = par("cex.axis"), do.n.k = TRUE, do.clus.stat = TRUE, ...)
x |
an object of appropriate class; for the |
dist |
a dissimilarity object inheriting from class
|
dmatrix |
a symmetric dissimilarity matrix ( |
full |
logical or number in |
subset |
a subset from |
object |
an object of class |
... |
further arguments passed to and from methods. |
FUN |
function used to summarize silhouette widths. |
nmax.lab |
integer indicating the number of labels which is considered too large for single-name labeling the silhouette plot. |
max.strlen |
positive integer giving the length to which strings are truncated in silhouette plot labeling. |
main , sub , xlab
|
arguments to |
col , border , cex.names
|
arguments passed
|
do.col.sort |
logical indicating if the colors |
do.n.k |
logical indicating if |
do.clus.stat |
logical indicating if cluster size and averages should be written right to the silhouettes. |
For each observation i, the silhouette width s(i) is
defined as follows:
Put a(i) = average dissimilarity between i and all other points of the
cluster to which i belongs (if i is the only observation in
its cluster, s(i) := 0 without further calculations).
For all other clusters C, put d(i,C) = average
dissimilarity of i to all observations of C. The smallest of these
d(i,C) is b(i) := min_C d(i,C),
and can be seen as the dissimilarity between i and its “neighbor”
cluster, i.e., the nearest one to which it does not belong.
Finally,

    s(i) = ( b(i) - a(i) ) / max( a(i), b(i) ).
silhouette.default()
is now based on C code donated by Romain
Francois (the R version being still available as cluster:::silhouetteR
).
Observations with a large s(i) (almost 1) are very well
clustered, a small s(i) (around 0) means that the observation
lies between two clusters, and observations with a negative s(i)
are probably placed in the wrong cluster.
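A minimal sketch of the definition above, computed by hand and compared with silhouette() (assumptions: two well-separated Gaussian groups, no singleton clusters, so the s(i) := 0 special case does not occur):

set.seed(1)
x  <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
            matrix(rnorm(20, mean = 5), ncol = 2))
cl <- rep(1:2, each = 10)                 # "true" clustering vector
D  <- as.matrix(dist(x))
s.hand <- sapply(seq_along(cl), function(i) {
  own <- cl == cl[i]
  a.i <- mean(D[i, own & seq_along(cl) != i])     # a(i): within-cluster average
  b.i <- min(tapply(D[i, !own], cl[!own], mean))  # b(i): nearest other cluster
  (b.i - a.i) / max(a.i, b.i)                     # s(i)
})
si <- silhouette(cl, dist(x))
stopifnot(all.equal(unname(s.hand), unname(si[, "sil_width"])))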
silhouette()
returns an object, sil
, of class
silhouette
which is an n x 3 matrix with
attributes. For each observation i,
sil[i,]
contains the
cluster to which i belongs as well as the neighbor cluster of i (the
cluster, not containing i, for which the average dissimilarity between its
observations and i is minimal), and the silhouette width of
the observation. The
colnames
correspondingly are
c("cluster", "neighbor", "sil_width")
.
summary(sil)
returns an object of class
summary.silhouette
, a list with components
si.summary: numerical summary of the individual silhouette widths s(i).
clus.avg.widths: numeric (rank 1) array of clusterwise means of silhouette widths where mean = FUN is used.
avg.width: the total mean FUN(s) where s are the individual silhouette widths.
clus.sizes: table of the cluster sizes.
call: if available, the call creating sil.
Ordered: logical identical to attr(sil, "Ordered"), see below.
sortSilhouette(sil)
orders the rows of sil
as in the
silhouette plot, by cluster (increasingly) and decreasing silhouette
width s(i).
attr(sil, "Ordered")
is a logical indicating if sil
is
ordered as by sortSilhouette()
. In that case,
rownames(sil)
will contain case labels or numbers, and attr(sil, "iOrd")
the ordering index vector.
While silhouette()
is intrinsic to the
partition
clusterings, and hence has a (trivial) method
for these, it is straightforward to get silhouettes from hierarchical
clusterings from silhouette.default()
with
cutree()
and distance as input.
By default, for clara()
partitions, the silhouette is
just for the best random subset used. Use full = TRUE
to compute (and later possibly plot) the full silhouette.
Rousseeuw, P.J. (1987) Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math., 20, 53–65.
chapter 2 of Kaufman and Rousseeuw (1990), see
the references in plot.agnes
.
partition.object
, plot.partition
.
data(ruspini) pr4 <- pam(ruspini, 4) str(si <- silhouette(pr4)) (ssi <- summary(si)) plot(si) # silhouette plot plot(si, col = c("red", "green", "blue", "purple"))# with cluster-wise coloring si2 <- silhouette(pr4$clustering, dist(ruspini, "canberra")) summary(si2) # has small values: "canberra"'s fault plot(si2, nmax= 80, cex.names=0.6) op <- par(mfrow= c(3,2), oma= c(0,0, 3, 0), mgp= c(1.6,.8,0), mar= .1+c(4,2,2,2)) for(k in 2:6) plot(silhouette(pam(ruspini, k=k)), main = paste("k = ",k), do.n.k=FALSE) mtext("PAM(Ruspini) as in Kaufman & Rousseeuw, p.101", outer = TRUE, font = par("font.main"), cex = par("cex.main")); frame() ## the same with cluster-wise colours: c6 <- c("tomato", "forest green", "dark blue", "purple2", "goldenrod4", "gray20") for(k in 2:6) plot(silhouette(pam(ruspini, k=k)), main = paste("k = ",k), do.n.k=FALSE, col = c6[1:k]) par(op) ## clara(): standard silhouette is just for the best random subset data(xclara) set.seed(7) str(xc1k <- xclara[ sample(nrow(xclara), size = 1000) ,]) # rownames == indices cl3 <- clara(xc1k, 3) plot(silhouette(cl3))# only of the "best" subset of 46 ## The full silhouette: internally needs large (36 MB) dist object: sf <- silhouette(cl3, full = TRUE) ## this is the same as s.full <- silhouette(cl3$clustering, daisy(xc1k)) stopifnot(all.equal(sf, s.full, check.attributes = FALSE, tolerance = 0)) ## color dependent on original "3 groups of each 1000": % __FIXME ??__ plot(sf, col = 2+ as.integer(names(cl3$clustering) ) %/% 1000, main ="plot(silhouette(clara(.), full = TRUE))") ## Silhouette for a hierarchical clustering: ar <- agnes(ruspini) si3 <- silhouette(cutree(ar, k = 5), # k = 4 gave the same as pam() above daisy(ruspini)) stopifnot(is.data.frame(di3 <- as.data.frame(si3))) plot(si3, nmax = 80, cex.names = 0.5) ## 2 groups: Agnes() wasn't too good: si4 <- silhouette(cutree(ar, k = 2), daisy(ruspini)) plot(si4, nmax = 80, cex.names = 0.5)
Returns the number of observations (sample size) corresponding to a dissimilarity-like object, or equivalently, the number of rows or columns of a matrix when only the lower or upper triangular part (without diagonal) is given.
It is nothing else but the inverse function of f(n) = n(n-1)/2.
sizeDiss(d)
d |
any R object with length (typically) |
a number; n if
length(d) == n(n-1)/2
for some integer n, and NA
otherwise.
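A small sketch of the arithmetic behind this (not necessarily the package's exact implementation): solving L = n(n-1)/2 for n gives the positive root of n^2 - n - 2L = 0.

L <- 10                          # e.g. length(d)
n <- (1 + sqrt(1 + 8 * L)) / 2   # positive root of n^2 - n - 2L = 0
if (n == round(n)) n else NA     # 5 here; NA when L is not a triangular number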
dissimilarity.object
and also
as.dist
for class dissimilarity
and
dist
objects which have a Size
attribute.
sizeDiss(1:10)# 5, since 10 == 5 * (5 - 1) / 2 sizeDiss(1:9) # NA n <- 1:100 stopifnot(n == sapply( n*(n-1)/2, function(n) sizeDiss(logical(n))))
Returns (and prints) a summary list for an agnes
object.
Printing gives more output than the corresponding
print.agnes
method.
## S3 method for class 'agnes' summary(object, ...) ## S3 method for class 'summary.agnes' print(x, ...)
x , object
|
a |
... |
potential further arguments (required by generic). |
data(agriculture) summary(agnes(agriculture))
Returns (and prints) a summary list for a clara
object.
Printing gives more output than the corresponding
print.clara
method.
## S3 method for class 'clara' summary(object, ...) ## S3 method for class 'summary.clara' print(x, ...)
x , object
|
a |
... |
potential further arguments (required by generic). |
## generate 2000 objects, divided into 5 clusters. set.seed(47) x <- rbind(cbind(rnorm(400, 0,4), rnorm(400, 0,4)), cbind(rnorm(400,10,8), rnorm(400,40,6)), cbind(rnorm(400,30,4), rnorm(400, 0,4)), cbind(rnorm(400,40,4), rnorm(400,20,2)), cbind(rnorm(400,50,4), rnorm(400,50,4)) ) clx5 <- clara(x, 5) ## Mis'classification' table: table(rep(1:5, rep(400,5)), clx5$clust) # -> 1 "error" summary(clx5) ## Graphically: par(mfrow = c(3,1), mgp = c(1.5, 0.6, 0), mar = par("mar") - c(0,0,2,0)) plot(x, col = rep(2:6, rep(400,5))) plot(clx5)
Returns (and prints) a summary list for a diana
object.
## S3 method for class 'diana' summary(object, ...) ## S3 method for class 'summary.diana' print(x, ...)
x , object
|
a |
... |
potential further arguments (required by generic). |
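A minimal hedged example (the data set is chosen for illustration only):

data(agriculture)
sd <- summary(diana(agriculture))
sd   # printed via print.summary.diana()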
Returns (and prints) a summary list for a mona
object.
## S3 method for class 'mona' summary(object, ...) ## S3 method for class 'summary.mona' print(x, ...)
x , object
|
a |
... |
potential further arguments (required by generic). |
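A minimal hedged example (assuming, as in the mona() documentation, that the binary animals data from this package is a suitable input):

data(animals)
summary(mona(animals))   # printed via print.summary.mona()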
Summarize a pam
object and return an object
of class summary.pam
.
There's a print
method for the latter.
## S3 method for class 'pam' summary(object, ...) ## S3 method for class 'summary.pam' print(x, ...)
x , object
|
a |
... |
potential further arguments (required by generic). |
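A minimal hedged example (data set and k chosen for illustration only):

data(ruspini)
summary(pam(ruspini, 4))   # printed via print.summary.pam()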
The objects of class "twins"
represent an agglomerative or
divisive (polythetic) hierarchical clustering of a dataset.
See agnes.object
and diana.object
for details.
This class of objects is returned from agnes
or diana
.
The "twins"
class has a method for the following generic function:
pltree
.
The following classes inherit from class "twins"
:
"agnes"
and "diana"
.
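A small hedged sketch of the class relations described above (the data set is only illustrative):

data(agriculture)
aa <- agnes(agriculture)
da <- diana(agriculture)
class(aa)               # "agnes" "twins"
class(da)               # "diana" "twins"
inherits(aa, "twins")   # TRUE
pltree(aa)              # the generic with a method for "twins" objects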
Compute the volume of a geometric R object.
This is a generic function with a method for ellipsoid
objects
(typically resulting from ellipsoidhull()
).
volume(object, ...) ## S3 method for class 'ellipsoid' volume(object, log = FALSE, ...)
object |
an R object the volume of which is wanted; for the
|
log |
|
... |
potential further arguments of methods, e.g. |
a number, the volume V of the given object
(or log(V)
if log = TRUE
).
Martin Maechler (2002, extracting from former clusplot
code);
Keefe Murphy (2019) provided code for dimensions d >= 3.
ellipsoidhull
for spanning ellipsoid computation.
## example(ellipsoidhull) # which defines 'ellipsoid' object <namefoo> myEl <- structure(list(cov = rbind(c(3,1),1:2), loc = c(0,0), d2 = 10), class = "ellipsoid") volume(myEl)# i.e. "area" here (d = 2) myEl # also mentions the "volume" set.seed(1) d5 <- matrix(rt(500, df=3), 100,5) e5 <- ellipsoidhull(d5)
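A hedged sketch (our assumption, not stated on this page): for p = 2 the volume ("area") of the ellipsoid {x : (x - loc)' A^{-1} (x - loc) <= d2} is pi * d2 * sqrt(det(A)), which can be compared with volume():

el <- structure(list(cov = rbind(c(3, 1), c(1, 2)), loc = c(0, 0), d2 = 10),
                class = "ellipsoid")
pi * el$d2 * sqrt(det(el$cov))   # ~ 70.25 under the formula assumed above
volume(el)                       # should agree if volume() uses that formula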
A data frame with the percentages of votes given to the Republican candidate in presidential elections from 1856 to 1976. Rows represent the 50 states, and columns the 31 elections.
data(votes.repub)
S. Peterson (1973): A Statistical History of the American Presidential Elections. New York: Frederick Ungar Publishing Co.
Data from 1964 to 1976 is from R. M. Scammon, American Votes 12, Congressional Quarterly.
An artificial data set consisting of 3000 points in 3 quite well-separated clusters.
data(xclara)
A data frame with 3000 observations on 2 numeric variables (named
V1
and V2
) giving the x and y
coordinates of the points, respectively.
Our version of xclara
is slightly more rounded than the one
from read.table("xclara.dat")
, and the relative
difference measured by all.equal
is 1.15e-7
for
V1
and 1.17e-7
for V2
, which suggests that our
version is the result of options(digits = 7)
formatting.
Previously (before May 2017), it was claimed that the three clusters were
each of size 1000, which is clearly wrong. pam(*, 3)
gives cluster sizes of 899, 1149, and 952, which, apart from seven
“outliers” (or “mislabellings”), correspond to
observation indices 1:900, 901:2050, and 2051:3000; see the example.
Sample data set accompanying the reference below (file ‘xclara.dat’ inside ‘clus_examples.tar.gz’).
Anja Struyf, Mia Hubert & Peter J. Rousseeuw (1996) Clustering in an Object-Oriented Environment. Journal of Statistical Software 1. doi:10.18637/jss.v001.i04
## Visualization: Assuming groups are defined as {1:1000}, {1001:2000}, {2001:3000} plot(xclara, cex = 3/4, col = rep(1:3, each=1000)) p.ID <- c(78, 1411, 2535) ## PAM's medoid indices == pam(xclara, 3)$id.med text(xclara[p.ID,], labels = 1:3, cex=2, col=1:3) px <- pam(xclara, 3) ## takes ~2 seconds cxcl <- px$clustering ; iCl <- split(seq_along(cxcl), cxcl) boxplot(iCl, range = 0.7, horizontal=TRUE, main = "Indices of the 3 clusters of pam(xclara, 3)") ## Look more closely now: bxCl <- boxplot(iCl, range = 0.7, plot=FALSE) ## We see 3 + 2 + 2 = 7 clear "outlier"s or "wrong group" observations: with(bxCl, rbind(out, group)) ## out 1038 1451 1610 30 327 562 770 ## group 1 1 1 2 2 3 3 ## Apart from these, what are the robust ranges of indices? -- Robust range: t(iR <- bxCl$stats[c(1,5),]) ## 1 900 ## 901 2050 ## 2051 3000 gc <- adjustcolor("gray20",1/2) abline(v = iR, col = gc, lty=3) axis(3, at = c(0, iR[2,]), padj = 1.2, col=gc, col.axis=gc)