Unsupervised classification is illustrated on the `titanic` dataset. It is a data.frame with 1309 observations and 8 variables containing information on the passengers of the Titanic. Each observation represents a passenger described by a set of real variables: age in years (`age`) and ticket price in pounds (`fare`); a set of counting variables: number of siblings/spouses aboard (`sibsp`) and number of parents/children aboard (`parch`); and a set of categorical variables: `sex`, ticket class (`pclass`), port of embarkation (`embarked`) and a binary variable indicating whether the passenger survived (`survived`). Furthermore, the dataset contains missing values for three variables: `age`, `fare` and `embarked`.
## Loading required package: RMixtCompUtilities
## pclass survived sex age sibsp parch fare embarked
## 1 1st 1 female 29.0 0 0 211.3375 S
## 16 1st 0 male NA 0 0 25.9250 S
## 38 1st 1 male NA 0 0 26.5500 S
## 169 1st 1 female 38.0 0 0 80.0000 <NA>
## 285 1st 1 female 62.0 0 0 80.0000 <NA>
## 1226 3rd 0 male 60.5 0 0 NA S
First, the dataset must be converted to the MixtComp format. Categorical variables must be numbered from 1 to the number of categories (e.g. 3 for `embarked`). This can be done using the `refactorCategorical` function, which takes as arguments the vector containing the data, the old labels and the new labels. Fully missing values must be indicated with a `?`.
titanicMC <- titanic
titanicMC$sex <- refactorCategorical(titanic$sex, c("male", "female"), c(1, 2))
titanicMC$pclass <- refactorCategorical(titanic$pclass, c("1st", "2nd", "3rd"), c(1, 2, 3))
titanicMC$embarked <- refactorCategorical(titanic$embarked, c("C", "Q", "S"), c(1, 2, 3))
titanicMC$survived <- refactorCategorical(titanic$survived, c(0, 1), c(1, 2))
titanicMC[is.na(titanicMC)] <- "?"
head(titanicMC)
## pclass survived sex age sibsp parch fare embarked
## 1 1 2 2 29 0 0 211.3375 3
## 2 1 2 1 0.9167 1 2 151.55 3
## 3 1 1 2 2 1 2 151.55 3
## 4 1 1 1 30 1 2 151.55 3
## 5 1 1 2 25 1 2 151.55 3
## 6 1 2 1 48 0 0 26.55 3
The dataset is split into two datasets to illustrate learning and prediction.
indTrain <- sample(nrow(titanicMC), floor(0.8 * nrow(titanicMC)))
titanicMCTrain <- titanicMC[indTrain, ]
titanicMCTest <- titanicMC[-indTrain, ]
Then, as all variables are stored as character in a data.frame, a `model` object indicating which model to use for each variable is created. In this example, a Gaussian model is used for the `age` and `fare` variables, a multinomial model for `sex`, `pclass`, `embarked` and `survived`, and a Poisson model for `sibsp` and `parch`.
We choose to run the clustering analysis for 1 to 20 clusters with 3 runs for every number of clusters. These runs can be parallelized using the `nCore` parameter.
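The code chunk for this step is not shown in the extract; a minimal sketch, assuming the `mixtCompLearn` function of RMixtComp with `nClass`, `nRun` and `nCore` parameters and a result object named `res`, could look like:

```r
library(RMixtComp)

# One model per variable: Gaussian for continuous variables, multinomial
# for categorical ones, Poisson for counts
model <- list(pclass   = "Multinomial",
              survived = "Multinomial",
              sex      = "Multinomial",
              age      = "Gaussian",
              sibsp    = "Poisson",
              parch    = "Poisson",
              fare     = "Gaussian",
              embarked = "Multinomial")

# 1 to 20 clusters, 3 runs per number of clusters, parallelized on 2 cores
res <- mixtCompLearn(titanicMCTrain, model, nClass = 1:20, nRun = 3, nCore = 2)
```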
## Warning in classicLearn(data, model, algo, nClass, criterion, nRun, nCore, : For k=19, MixtComp failed with the following error:
## mStep error in variable: fare
## Gaussian variables must have a minimum standard deviation of 1.e-8 in each class. It is not the case in class: 5. If some values are repeated often in this variable, maybe a Multinomial or a Poisson variable will describe it better.
## Warning in classicLearn(data, model, algo, nClass, criterion, nRun, nCore, : For k=20, MixtComp failed with the following error:
## mStep error in variable: fare
## Gaussian variables must have a minimum standard deviation of 1.e-8 in each class. It is not the case in class: 16. If some values are repeated often in this variable, maybe a Multinomial or a Poisson variable will describe it better.
## Warning in getBIC(x): The given MixtComp object only contains failed runs.
## Warning in getICL(x): The given MixtComp object only contains failed runs.
## Warning in getBIC(x): The given MixtComp object only contains failed runs.
## Warning in getICL(x): The given MixtComp object only contains failed runs.
The `summary` and `plot` functions are used to get an overview of the results for the best number of classes according to the chosen criterion (BIC or ICL). If this number is not the one desired by the user, it can be changed via the `nClass` parameter. The `summary` function displays the chosen number of clusters and some outputs such as the discriminative power, indicating the variables that contribute most to class separation, and the parameters associated with the three most discriminant variables.
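The output below can be reproduced with the following call (assuming the learned object is named `res`, as in the sketch above):

```r
summary(res)
```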
## ############### MixtCompLearn Run ###############
## nClass: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## Criterion used: BIC
## 1 2 3 4 5 6 7
## BIC -14436.29 -12926.44 -12403.91 -11943.68 -11869.60 -11653.61 -11635.10
## ICL -14436.29 -12960.27 -12434.51 -11974.17 -11953.52 -11715.12 -11708.37
## 8 9 10 11 12 13 14
## BIC -11643.68 -11609.43 -11574.29 -11464.64 -11604.54 -11604.15 -11561.00
## ICL -11708.90 -11690.79 -11648.30 -11566.25 -11682.97 -11699.28 -11687.29
## 15 16 17 18 19 20
## BIC -11573.30 -11583.07 -11585.21 -11644.31 NaN NaN
## ICL -11711.77 -11707.00 -11725.44 -11767.65 NaN NaN
## Best model: 11 clusters
## ########### MixtComp Run ###########
## Number of individuals: 1047
## Number of variables: 8
## Number of clusters: 11
## Mode: learn
## Time: 0.396 s
## SEM burn-in iterations done: 50/50
## SEM run iterations done: 50/50
## Observed log-likelihood: -10970.93
## BIC: -11464.64
## ICL: -11566.25
## Discriminative power:
## fare pclass sibsp parch survived age sex embarked
## 0.626 0.419 0.178 0.171 0.141 0.126 0.123 0.120
## Proportions of the mixture:
## 0.054 0.099 0.086 0.064 0.068 0.182 0.147 0.078 0.053 0.143 0.027
## Parameters of the most discriminant variables:
## - fare: Gaussian
## mean sd
## k: 1 39.463 15.980
## k: 2 12.475 1.440
## k: 3 32.691 21.735
## k: 4 193.148 93.006
## k: 5 26.344 7.353
## k: 6 7.790 0.714
## k: 7 7.837 0.107
## k: 8 66.635 16.492
## k: 9 27.915 1.910
## k: 10 15.883 4.759
## k: 11 64.564 17.753
## - pclass: Multinomial
## modality 1 modality 2 modality 3
## k: 1 0.000 0.000 1.000
## k: 2 0.000 1.000 0.000
## k: 3 0.297 0.537 0.167
## k: 4 1.000 0.000 0.000
## k: 5 0.000 0.912 0.088
## k: 6 0.000 0.000 1.000
## k: 7 0.000 0.000 1.000
## k: 8 0.963 0.037 0.000
## k: 9 1.000 0.000 0.000
## k: 10 0.000 0.000 1.000
## k: 11 1.000 0.000 0.000
## - sibsp: Poisson
## lambda
## k: 1 3.661
## k: 2 0.000
## k: 3 0.500
## k: 4 0.742
## k: 5 0.797
## k: 6 0.092
## k: 7 0.000
## k: 8 0.655
## k: 9 0.000
## k: 10 0.868
## k: 11 0.407
## ####################################
The `plot` function displays the values of the criteria, the discriminative power of the variables and the parameters of the three most discriminative variables. More variables can be displayed using the `nVarMaxToPlot` parameter.
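The plots listed below would be produced by a call such as (assuming the result object is named `res`):

```r
plot(res, nVarMaxToPlot = 3)
```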
## $criteria
## Warning: Removed 4 rows containing missing values or values outside the scale range
## (`geom_line()`).
## Warning: Removed 4 rows containing missing values or values outside the scale range
## (`geom_point()`).
##
## $discrimPowerVar
##
## $proportion
##
## $fare
##
## $pclass
##
## $sibsp
The most discriminant variables for clustering are `fare` and `pclass`. The similarity between variables is shown with the following code:
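The call itself is missing from the extract; assuming the `computeSimilarityVar` helper of RMixtCompUtilities returns the similarity matrix, it could be:

```r
# Similarity matrix between variables, rounded to 2 decimals
round(computeSimilarityVar(res), 2)
```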
## fare age pclass survived sex embarked sibsp parch
## fare 1.00 0.34 0.38 0.35 0.34 0.36 0.35 0.35
## age 0.34 1.00 0.58 0.74 0.75 0.73 0.70 0.73
## pclass 0.38 0.58 1.00 0.59 0.57 0.58 0.53 0.54
## survived 0.35 0.74 0.59 1.00 0.82 0.75 0.70 0.72
## sex 0.34 0.75 0.57 0.82 1.00 0.75 0.71 0.74
## embarked 0.36 0.73 0.58 0.75 0.75 1.00 0.69 0.71
## sibsp 0.35 0.70 0.53 0.70 0.71 0.69 1.00 0.71
## parch 0.35 0.73 0.54 0.72 0.74 0.71 0.71 1.00
The greatest similarity is between `survived` and `sex`; this relation is well known in the dataset, with a much greater proportion of women surviving compared to men. On the contrary, there is little similarity between `fare` and the other variables.
Getters are available to easily access some results: `getBIC`, `getICL`, `getCompletedData`, `getParam`, `getProportion`, `getTik`, `getPartition`, … All these functions use the model maximizing the requested criterion. If results for another number of classes are desired, the `extractMixtCompObject` function can be used. For example:
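A sketch of the calls producing the output below (assuming `extractMixtCompObject` takes the result object and an `nClass` argument):

```r
# Proportions of the best model according to the criterion (11 clusters here)
getProportion(res)

# Proportions for the 2-cluster model
res2 <- extractMixtCompObject(res, nClass = 2)
getProportion(res2)
```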
## k: 1 k: 2 k: 3 k: 4 k: 5 k: 6 k: 7
## 0.05358852 0.09856459 0.08612440 0.06411483 0.06794258 0.18181818 0.14736842
## k: 8 k: 9 k: 10 k: 11
## 0.07846890 0.05263158 0.14258373 0.02679426
## k: 1 k: 2
## 0.5052531 0.4947469
Once a model is learnt, one can use it to predict the clusters of new individuals.
The probabilities of belonging to the different classes and the associated partition are given by:
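The prediction calls are not shown in the extract; a sketch, assuming the `mixtCompPredict` interface of RMixtComp (the `resLearn` and `nClass` arguments, and a `log` argument to `getTik`, are assumptions to check against the package documentation):

```r
resPredict <- mixtCompPredict(titanicMCTest, model, resLearn = res, nClass = 11)

head(getTik(resPredict, log = TRUE))  # log-probabilities of class membership
head(getPartition(resPredict))        # most probable class for each individual
```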
## [,1] [,2] [,3] [,4] [,5]
## [1,] -Inf -Inf -0.017804592 -Inf -4.037187947
## [2,] -Inf -Inf -0.006935957 -Inf -4.974502173
## [3,] -Inf -Inf -44.871338392 -Inf 0.000000000
## [4,] -Inf -Inf -3.791785493 -Inf -0.022813555
## [5,] -Inf -Inf -7.823181878 -Inf -0.000400426
## [6,] -Inf -Inf -2.025351010 -Inf -0.141503106
## [1] 3 3 5 5 5 5