Research | Open Access

# On an ensemble algorithm for clustering cancer patient data

Ran Qi^{1}, Dengyuan Wu^{2}, Li Sheng^{3}, Donald Henson^{4}, Arnold Schwartz^{5}, Eric Xu^{6}, Kai Xing^{7} and Dechang Chen^{8}

*BMC Systems Biology* **7 (Suppl 4)**:S9

https://doi.org/10.1186/1752-0509-7-S4-S9

© Qi et al.; licensee BioMed Central Ltd. 2013

**Published:** 23 October 2013

## Abstract

### Background

The TNM staging system is based on three anatomic prognostic factors: Tumor, Lymph Node and Metastasis. However, cancer is no longer considered an anatomic disease. Therefore, the TNM should be expanded to accommodate new prognostic factors in order to increase the accuracy of estimating cancer patient outcome. The ensemble algorithm for clustering cancer data (EACCD) by Chen *et al*. reflects an effort to expand the TNM without changing its basic definitions. Though results on using EACCD have been reported, there has been no study on the analysis of the algorithm. In this report, we examine various aspects of EACCD using a large breast cancer patient dataset. We compared the output of EACCD with the corresponding survival curves, investigated the effect of different settings in EACCD, and compared EACCD with alternative clustering approaches.

### Results

Using the basic *T* and *N* definitions, EACCD generated a dendrogram that shows a graphic relationship among the survival curves of the breast cancer patients. The dendrograms from EACCD are robust for large values of *m* (the number of runs in the learning step). When *m* is large, the dendrograms depend on the linkage functions.

The statistical tests employed in the learning step, however, have minimal effect on the dendrogram for large *m*. In addition, if the step for learning dissimilarity is omitted from EACCD, the resulting approaches can show degraded performance. Furthermore, clustering based only on prognostic factors could generate misleading dendrograms, and direct use of partitioning techniques could lead to misleading assignments to clusters.

### Conclusions

When only the Partitioning Around Medoids (PAM) algorithm is involved in the step of learning dissimilarity, large values of *m* are required to obtain robust dendrograms, and for a large *m* EACCD can effectively cluster cancer patient data.

## Keywords

- Survival Curve
- Average Linkage
- Breast Cancer Data
- Ensemble Algorithm
- Additional Prognostic Factor

## Background

Accurate outcome (survival) estimation is often the key in the successful treatment of cancer patients. Estimation depends on clinical or laboratory variables or factors that are linked to patient outcome. Found in all specialties of medicine, predictive factors take on significant clinical meaning when treatment options are available, but they become more important if treatment options are limited and not always effective.

Currently, the most common predictive factors in cancer medicine are the three variables *T*, *N*, and *M* of the TNM (*T*umor, Lymph *N*ode, and *M*etastasis) staging system that define the anatomic extent of disease [1]. The "*T*" usually refers to the size of the primary tumor, "*N*" refers to the presence or absence of metastatic deposits in regional lymph nodes, and "*M*" indicates the presence of metastatic disease. With the TNM staging system, levels of these three variables are combined, and patients are classified into four stage groups according to the different combinations of levels. The outcome estimation of patients is then based on the survival function estimated for each stage.

The TNM was created by surgeons primarily for surgery. However, cancer medicine no longer lives in the world where surgery remains the only treatment. The field of cancer is now characterized by screening and early detection, proteogenomics, multiple therapies, and a bewildering array of prognostic factors. Advances in molecular medicine, imaging, and therapeutics are now forcing us to integrate additional prognostic factors for more accurate estimation of patient outcome [2–5]. Therefore, to improve the estimation of outcome, methods are needed to incorporate additional prognostic factors into the TNM without changing the anatomic definitions.

The ensemble algorithm for clustering cancer data (EACCD) by Chen *et al.* [6] is designed to explore expansion of the TNM by integrating additional factors into the system. Though many results on using EACCD have been reported, there has been no study available to analyze the algorithm. In this report, we present an analysis of EACCD by using a large breast cancer dataset. We compared the output of EACCD with the corresponding survival curves, investigated the effect of different settings for EACCD, and compared EACCD with several other clustering approaches. This report represents an extensive expansion of the work in [7].

## Method

### EACCD

In this section, we describe EACCD. Our presentation allows a collection of partitioning methods in constructing dissimilarities and thus is more general than that in [6]. Let the record for the *i*th patient be (*x*_{i0}, *x*_{i1}, ..., *x*_{ip}, *δ*_{i}), where *x*_{i0} equals the observed time (censored or un-censored survival time), *x*_{ij} are measurements on variables (factors) *X*_{j} for *j* = 1, ..., *p*, and *δ*_{i} is the event indicator, defined to be 1 if the event (e.g., death) has occurred and 0 if the time on study is right-censored. Define a combination to be the set of records (*x*_{i0}, *x*_{i1}, ..., *x*_{ip}, *δ*_{i}) that corresponds to one level of each variable (a continuous variable should be discretized). EACCD is an algorithm used to cluster combinations. In the algorithm, the dissimilarity between two combinations is learnt by repeatedly applying clustering (partitioning) approaches based on criterion minimization, and the learnt dissimilarity measure is then used with a hierarchical clustering method to find final clusters of combinations. The algorithm involves the following three steps.

### Computing initial dissimilarity

Suppose there are *n* combinations **x**_{1}, **x**_{2}, ..., **x**_{n}. The following initial dissimilarity measure $di{s}_{0}\left(\mathbf{x}_{i},\mathbf{x}_{{i}^{\prime}}\right)$ is then defined between two combinations **x**_{i} and **x**_{i'}:

$$di{s}_{0}\left(\mathbf{x}_{i},\mathbf{x}_{{i}^{\prime}}\right)={d}_{0}.\qquad (1)$$

Here *d*_{0} is the value of a test statistic (e.g., the log-rank test statistic [8]) used to determine if there is a difference in the survival functions between the two populations associated with **x**_{i} and **x**_{i'}. In general, $di{s}_{0}\left(\mathbf{x}_{i},\mathbf{x}_{{i}^{\prime}}\right)$ assumes any non-negative value.

### Computing learnt dissimilarity

Let *C* denote a cluster assignment, assigning the *i*th combination to a cluster, i.e., *C*(**x**_{i}) ∈ {1, 2, ..., *K*} for a predetermined integer *K*. The optimal assignment *C** is obtained by minimizing the "within-cluster" scatter, i.e., by solving the following discrete optimization problem:

$$C^{*}=\underset{C}{\arg\min}\sum_{k=1}^{K}\sum_{C(\mathbf{x}_{i})=k}\sum_{C(\mathbf{x}_{i^{\prime}})=k}di{s}_{0}\left(\mathbf{x}_{i},\mathbf{x}_{{i}^{\prime}}\right).\qquad (2)$$

Given the dataset {**x**_{1}, **x**_{2}, ..., **x**_{n}}, one *K* and one clustering or partitioning method may be chosen to partition the data into *K* clusters. However, the final assignment usually depends on the selected method and the initial allocation. To overcome this, one can run the partition process *m* times, where each time a number *K* is randomly picked from a given interval [*K*_{1}, *K*_{2}] and a partitioning procedure is also randomly selected. Define *δ*_{l}(*i*, *j*) = 1 if the *l*th run of a procedure does not assign **x**_{i} and **x**_{j} to the same cluster, and *δ*_{l}(*i*, *j*) = 0 otherwise. Then define the following dissimilarity measure between two combinations **x**_{i} and **x**_{j}:

$$dis\left(\mathbf{x}_{i},\mathbf{x}_{j}\right)=\frac{1}{m}\sum_{l=1}^{m}{\delta}_{l}\left(i,j\right).\qquad (3)$$

Note that *dis*(**x**_{i}, **x**_{j}) ranges from 0 to 1. A smaller value of *dis*(**x**_{i}, **x**_{j}) indicates that **x**_{i} and **x**_{j} most likely come from the same "hidden" group. In other words, a smaller dissimilarity *dis*(**x**_{i}, **x**_{j}) is expected to imply a smaller difference between the two survival functions associated with the two combinations.

### Hierarchical clustering

This step clusters the combinations by applying a linkage method [10] to the learnt dissimilarity *dis*(**x**_{i}, **x**_{j}). The primary output of EACCD is a dendrogram that provides a summary of the survival experiences based on the levels of prognostic factors, and thus has multiple applications.

The algorithm is outlined in Algorithm 1. Note that if only PAM is used for computing the learnt dissimilarity, then the algorithm reduces to that introduced in [6].

### Data set

The breast cancer data were extracted from the SEER database [11]. Each patient record contains *T* (tumor size), *N* (nodal status), *X* (survival time), and *δ* (censoring status). The factors *T* and *N* have 3 and 4 categories, respectively, as listed in Table 1. Therefore there are 12 (3 × 4) combinations based on *T* and *N*. For convenience, we denote by *T*1*N*0 the combination formed using categories *T*1 and *N*0, by *T*1*N*1 the combination formed using categories *T*1 and *N*1, and so on.

Definitions of *T* and *N* for SEER breast cancer cases from 1990-2000.

| Prognostic factors | Categories | Level |
|---|---|---|
| Tumor size | | 1, 2, 3 |
| Nodal status | | 1, 2, 3, 4 |

**Algorithm 1** Ensemble algorithm for clustering cancer patient data

1. Define the initial dissimilarity *dis*_{0} in (1).
2. Obtain a collection of procedures for solving (2). Choose *m*, *K*_{1}, and *K*_{2}, and run these procedures *m* times, where each time a procedure is randomly selected from the collection and a *K* is randomly chosen from the interval [*K*_{1}, *K*_{2}]. Then construct the pairwise dissimilarity measure *dis* using equation (3).
3. Cluster the combinations by applying a linkage method and the learnt measure *dis*.

### Evaluation of EACCD

We evaluated EACCD by performing a series of experiments using the programming language "R" [12]. The PAM algorithm was used in the second step of EACCD throughout the evaluation. Random medoids were initially selected for the PAM in all cases except for A_{4}, described below, where the default initial medoids in "R" were used.

The evaluation began with the application of the algorithm to clustering the breast cancer patients. We examined how the algorithm grouped the patients and compared this grouping with the possible grouping pattern exhibited in the survival curve plot. For the experiments, the log-rank test statistic [8] was used to determine the initial dissimilarity in the first step of the algorithm. In the second step we chose *K*_{1} = 2, *K*_{2} = 11 (the total number of combinations minus one). The PAM algorithm was repeatedly executed for *m* = 10000 times. In the third step, the average linkage hierarchical clustering technique [10] was used.

We then examined the effect of different settings in EACCD on the dendrogram generated by the algorithm. There were mainly three "factors" that could influence the final result in EACCD: test (the statistical test employed in determining the initial dissimilarity in Step 1 of the algorithm), *m* (the number of rounds of partitioning procedures performed in obtaining the learnt dissimilarity in Step 2) and the linkage function (the linkage function used in the hierarchical clustering procedure in Step 3). The effects of these "factors" were analyzed by varying their "values." While the value of *m* was chosen from {10, 20, 50, 100, 500, 1000, 5000, 10000, 20000, 30000}, we considered three tests (the log-rank test, the Gehan-Wilcoxon's test, and the Tarone and Ware's test [8]) and three linkage functions (the average linkage, the complete linkage, and the single linkage [10]).

Finally, we compared EACCD with four additional approaches that could be used to cluster the cancer patient data. These approaches were either straightforward or modifications of EACCD. Specifically, the four approaches *A*_{1}, *A*_{2}, *A*_{3}, *A*_{4} are described below. For demonstration, we used *m* = 10000, the log-rank test, and the average linkage for the setting of EACCD.

#### Approach A_{1}

This approach was tailored from EACCD by omitting the learning step for dissimilarity. The initial dissimilarity measure *dis*_{0} in (1) was obtained first using the log-rank test and then standardized into [0, 1] by the equation $di{s}_{{A}_{1}}^{s}=di{s}_{0}/\max\phantom{\rule{2.77695pt}{0ex}}\left\{di{s}_{0}\right\}$. The standardized initial dissimilarity values were then used in the hierarchical clustering procedure with the average linkage function.

#### Approach A_{2}

In testing the difference between two survival curves associated with two combinations, a smaller p-value normally indicates a larger difference between the survival curves. Therefore 1 − *p*, ranging from 0 to 1, could be used as the pairwise dissimilarity measure between two combinations in light of survival. In *A*_{2}, this dissimilarity 1 − *p*, from the log-rank test, was used directly in the hierarchical clustering procedure with the average linkage function. The learning step for dissimilarity was not required.
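The conversion from test statistic to the 1 − *p* dissimilarity is a one-liner; it also makes visible why *A*_{2} runs into trouble, since large log-rank statistics push *p* toward zero and most pairs toward dissimilarity 1. A sketch with hypothetical pairwise statistics, assuming the log-rank statistic is referred to a chi-square distribution with 1 degree of freedom:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical pairwise log-rank statistics for 3 combinations
stat = np.array([[ 0.0, 10.4, 55.0],
                 [10.4,  0.0, 40.0],
                 [55.0, 40.0,  0.0]])

p = chi2.sf(stat, df=1)   # p-value of each pairwise test
dis = 1.0 - p             # the A2 dissimilarity, in [0, 1]
np.fill_diagonal(dis, 0.0)
```

For statistics of even moderate size, `dis` saturates very close to 1, so many pairs become nearly indistinguishable at the top of the dendrogram.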

#### Approach A_{3}

In *A*_{3}, we considered one traditional procedure for clustering the cancer data by using the two factors *T* and *N*. For each combination, let $\widehat{T}$ denote the average value of *T* and $\widehat{N}$ the average value of *N*. We could use $\widehat{T}$ and $\widehat{N}$ to represent the *T* and *N* value of the combination, respectively. Since $\widehat{T}$ has a much larger range than $\widehat{N}$, a linear transformation was performed to standardize $\widehat{T}$ and $\widehat{N}$ into [0, 1] as ${\widehat{T}}^{s}$ = ($\widehat{T}$ − min{$\widehat{T}$})/(max{$\widehat{T}$} − min{$\widehat{T}$}) and ${\widehat{N}}^{s}$ = ($\widehat{N}$ − min{$\widehat{N}$})/(max{$\widehat{N}$} − min{$\widehat{N}$}). Let ${\widehat{T}}_{i}^{s}$ and ${\widehat{N}}_{i}^{s}$ be the standardized values for combination **x**_{i}. Then the dissimilarity between combinations **x**_{i} and **x**_{j} was defined as $dis\left({\mathbf{x}}_{i},{\mathbf{x}}_{j}\right)=|{\widehat{T}}_{i}^{s}-{\widehat{T}}_{j}^{s}|+|{\widehat{N}}_{i}^{s}-{\widehat{N}}_{j}^{s}|.$ This dissimilarity *dis* was then standardized into the range [0, 1] using $di{s}_{{A}_{3}}^{s}=dis/\mathsf{\text{max}}\left\{dis\right\}$. Based on $di{s}_{{A}_{3}}^{s}$, hierarchical clustering with the average linkage was then performed.

#### Approach A_{4}

In *A*_{4} the PAM clustering algorithm was directly used to partition the cancer data. The quantity $di{s}_{{A}_{1}}^{s}$ in the approach *A*_{1} was taken as the input dissimilarity measurement. The number of clusters was set at 2, ... , 11, respectively, and thus 10 partition results were available.

## Results and discussion

### An application study

Applying EACCD to the breast cancer data yielded the dendrogram in Figure 1(a), which provides an overall view of the relationship among the outcomes as the levels of prognostic factors change. We begin with the leftmost side, or branch, of Figure 1(a). The dissimilarity (difference) between the survival curve of *T*1*N*3 and the survival curve of *T*3*N*2 is 0.20. Merge *T*1*N*3 with *T*3*N*2 and denote by *T*1*N*3 + *T*3*N*2 the resulting group of patients. Then the difference between the survival curve of *T*1*N*3 + *T*3*N*2 and the survival curve of *T*2*N*3 is 0.41. Merge *T*1*N*3 + *T*3*N*2 with *T*2*N*3 and denote the resulting group of patients by *T*1*N*3 + *T*3*N*2 + *T*2*N*3. In light of survival, this group differs from *T*3*N*3 by a value of 0.67. Merge *T*3*N*3 with *T*1*N*3 + *T*3*N*2 + *T*2*N*3 and denote the resulting group by *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3; then *T*2*N*2 + *T*3*N*1 differs from this group by a value of 0.70 in terms of survival. Here *T*2*N*2 + *T*3*N*1 is the group from merging *T*2*N*2 with *T*3*N*1, where *T*2*N*2 differs from *T*3*N*1 by a value of 0.00. Denote by *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3 + *T*2*N*2 + *T*3*N*1 the result of merging *T*2*N*2 + *T*3*N*1 with *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3. The above shows the relationship among the survival curves of the combinations contained in the left branch of the dendrogram. A similar interpretation applies to the survival curves of the combinations in the right branch. Finally, the left branch differs from the right branch by a value of 1.0 in light of survival. That is, 1.0 is the difference between the survival curve of the group *T*1*N*1 + *T*2*N*0 + *T*3*N*0 + *T*1*N*2 + *T*2*N*1 + *T*1*N*0 and the survival curve of the group *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3 + *T*2*N*2 + *T*3*N*1.

The relationship among the survival curves exhibited in the dendrogram of *T* and *N* (Figure 1(a)) can be confirmed by visually checking the 12 survival curves shown in Figure 1(b). These survival curves were constructed by the Kaplan-Meier procedure [8]. The survival curves in Figure 1(b) can be divided into two groups, group 1 consisting of the lower six curves and group 2 consisting of the upper six curves. The curves in group 1 and group 2 appear on the left and right branches of the dendrogram in Figure 1(a), respectively. Thus, from a practical perspective, the dendrogram initially divides the patients into those with a favorable outcome and those with an unfavorable outcome. A visual check of group 1 in Figure 1(b) shows certain differences among the curves. For instance, the two closest curves are those of *T*2*N*2 and *T*3*N*1, and the next two closest are those of *T*1*N*3 and *T*3*N*2. If we merge combinations in the order of increasing differences between survival rates, we would first merge *T*2*N*2 with *T*3*N*1, then merge *T*1*N*3 with *T*3*N*2, merge *T*1*N*3 + *T*3*N*2 with *T*2*N*3, merge *T*1*N*3 + *T*3*N*2 + *T*2*N*3 with *T*3*N*3, and finally merge *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3 with *T*2*N*2 + *T*3*N*1. Clearly, this observation coincides with the relationship among survival curves depicted by the left branch of the dendrogram in Figure 1(a). Similarly, the right branch of the dendrogram captures the survival differences and the order of merging of the six curves in group 2.

### Effect of settings on EACCD

#### Effect of m

The learnt dissimilarity *dis* in EACCD depends on the value of *m* and converges when *m* is sufficiently large. If, on the other hand, *m* is small, the dissimilarity is not convergent and can be regarded as a variable, so the resulting dendrograms will not be robust. Specifically, for a small value of *m*, multiple runs of EACCD with the same test and same linkage may produce significantly different dendrograms. This is shown in Figures 2(a) and 2(b). However, when *m* is large, the dendrograms for the same test and same linkage are virtually the same. For example, when *m* = 10000, 20000, 30000, the dendrograms (Figures 3(d), (e), (f)) based on the Gehan-Wilcoxon's test and the complete linkage are similar, and the dendrograms (Figures 3(g), (h), (i)) based on the Tarone-Ware's test and the single linkage are almost identical. Therefore, a large *m* should be used when applying EACCD.

#### Effect of tests and linkage functions

We then examined the effects of statistical tests and linkage functions for a large *m*. Figure 4 lists nine dendrograms for *m* = 10000, combining three tests (the log-rank test, the Gehan-Wilcoxon's test, and the Tarone and Ware's test) with three linkage functions (the average linkage, the complete linkage, and the single linkage). Two observations can be drawn by reading the figure horizontally and vertically. First, for a given test, the dendrograms based on different linkage functions exhibit the same merging pattern, but merging or fusion can occur at significantly different dissimilarity values. For example, with the log-rank test, the dendrogram from the average linkage has the same shape and merging pattern as the dendrogram from the complete linkage. For the average linkage, *T*2*N*2 + *T*3*N*1 is merged with *T*1*N*3 + *T*3*N*2 + *T*2*N*3 + *T*3*N*3 at a dissimilarity of 0.76, but that fusion occurs at a dissimilarity of 0.79 for the complete linkage. Second, for a given linkage, the dendrograms derived from different tests are virtually the same, which indicates that for a given linkage, test statistics have minimal influence on the dendrogram. For instance, Figures 4(a), (d), and (g) essentially show the same dendrogram for the average linkage and the three tests (the log-rank test, the Gehan-Wilcoxon's test, and the Tarone and Ware's test).

In summary, our experiments have shown that a large *m* (such as *m* ≥ 10000) should be used in EACCD. For a large *m*, different linkage functions can generate different dendrograms, but different statistical tests have minimal or no influence on the dendrogram.

### Comparisons with alternative approaches

#### Approach A_{1}

In the approach *A*_{1}, a hierarchical clustering procedure with the average linkage was applied directly to the breast cancer data, using the dissimilarity determined by the value of the log-rank test statistic. The dendrogram is shown in Figure 5(a). It indicates that *T*1*N*0 becomes a separate group, for the following reason. Consider the set *S* containing all the dissimilarities between one survival function and its "nearest" neighbor, identified visually from Figure 1(b). Computation shows that the dissimilarity between *T*1*N*0 and its nearest neighbor *T*1*N*1 is the maximum of *S*, and it is nearly 12 times larger than the second largest value in *S*. According to the construction of the dendrogram, *T*1*N*0 is therefore merged with the group of all the other eleven combinations at the last step of the hierarchical clustering procedure.

Note that the combination *T* 1*N* 0 contains significantly more patients than any other combination (Figure 1(b)). Other experiments showed that if the number of patients in *T* 1*N* 0 was reduced to a quantity comparable with the number of patients in other combinations, dendrograms from the approach *A*_{1} would have the same shape and merging pattern as in Figure 1(a). This suggests that *A*_{1} is sensitive to the relative size of the combinations.

#### Approach A_{2}

The approach *A*_{2} also used a hierarchical clustering procedure with the average linkage to directly cluster the breast cancer data, but with the dissimilarity 1 − *p* obtained from the p-value of the log-rank test. The dendrogram, shown in Figure 5(b), indicates that the merging steps near the top are indistinguishable for several combinations. The reason is simply that the dissimilarity 1 − *p* equals 1 for most pairs of combinations, due to rounding in the computation of very small p-values.

#### Approach A_{3}

We employed *A*_{3} to cluster the data using only *T* and *N*; survival times were not used with this approach. The corresponding dendrogram is shown in Figure 5(c). Comparing Figure 5(c) with the survival curve plot in Figure 1(b), we can observe that the merging pattern described in the dendrogram at low levels of dissimilarity does not seem reasonable. For instance, the dendrogram indicates that *T*2*N*3 and *T*1*N*3 merge first and then merge with *T*3*N*3 to form a group without *T*3*N*2, which is not reasonable in light of Figure 1(b). Therefore the traditional clustering procedure using *T* and *N* does not work here. The reason might be that *T* and *N* together could not capture the main information regarding the survival of cancer patients.

The approach *A*_{3} can be modified by incorporating the learning step, as in EACCD. One modification, denoted by ${A}_{3}^{*}$, is obtained by replacing *dis*_{0} in the first step of EACCD by $di{s}_{{A}_{3}}^{s}$ and then following steps 2 and 3 in EACCD with the average linkage. Figure 5(d) shows the dendrogram (*m* = 10000), which again presents unreasonable grouping assignments.

#### Approach A_{4}

Table 2 lists the partitions into four clusters obtained from EACCD and from the direct use of the PAM. The PAM separates *T*2*N*1 from *T*1*N*2, which should be placed into the same group as indicated by the survival plot (Figure 1(b)). Therefore, the partition of the data from EACCD is more consistent with the survival curves than that from the PAM.

Partition results for four clusters of SEER breast cancer data from 1990-2000.

| | EACCD | PAM |
|---|---|---|
| Group 1 | T1N0 | T1N0 |
| Group 2 | T1N1, T2N0, T3N0 | T1N1, T2N0, T3N0, T2N1 |
| Group 3 | T1N2, T2N1 | T1N2, T2N2, T3N1 |
| Group 4 | T1N3, T2N2, T2N3, T3N1, T3N2, T3N3 | T1N3, T2N3, T3N2, T3N3 |

In summary, the results of these comparisons have shown that 1) if the step for learning dissimilarity is omitted in EACCD, then the resulting approaches can have a degraded performance, 2) if survival times are not taken into account, then clustering based on prognostic factors will likely generate misleading dendrograms, and 3) direct applications of partitioning techniques to the data can lead to misleading assignments to clusters.

## Conclusion

This report presents a three-pronged analysis of EACCD based on a breast cancer patient dataset. First, we examined whether grouping patients by EACCD was consistent with the "natural" grouping of survival curves derived directly from the data. Second, we investigated the effect of different settings in EACCD. Third, we compared EACCD with other clustering approaches. The results showed that if only the PAM is employed for learning dissimilarity, large values of *m* should be used with EACCD, and that dendrograms generated from EACCD with the PAM and a large *m* primarily depend on the linkage functions and not on the statistical tests used in the learning step. The results also showed that EACCD can be applied to cancer patient data to obtain meaningful dendrograms.

## Declarations

### Acknowledgements

Based on "Analysis of an Ensemble Algorithm for Clustering Cancer Data," by Wu, D., Sheng, L., Xu, E., Xing, K., and Chen, D., which appeared in 2012 IEEE International Conference on Bioinformatics and Biomedicine Workshops (BIBMW), 754-755. We gratefully acknowledge the fruitful discussions with Mary Brady and Alden Dima from the National Institute of Standards and Technology and Shujia Zhou from the University of Maryland at Baltimore County.

*Note*: The opinions expressed herein are those of the authors and do not necessarily represent those of the Uniformed Services University of the Health Sciences and the Department of Defense.

**Declarations**

The publication costs for this article were funded by the corresponding author.

This article has been published as part of *BMC Systems Biology* Volume 7 Supplement 4, 2013: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2012: Systems Biology. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcsystbiol/supplements/7/S4.

## References

- Greene FL, Compton CC, Fritz AG, Shah J, Winchester DP: AJCC Cancer Staging Atlas. 2006, Springer
- Burke H, Henson D: Criteria for prognostic factors and for an enhanced prognostic system. Cancer. 1993, 72: 3131-3135. 10.1002/1097-0142(19931115)72:10<3131::AID-CNCR2820721039>3.0.CO;2-J
- Burke H, Goodman P, Rosen D, Henson D, Weinstein J, Harrell F: Artificial neural networks improve the accuracy of cancer survival prediction. Cancer. 1997, 79: 857-862. 10.1002/(SICI)1097-0142(19970215)79:4<857::AID-CNCR24>3.0.CO;2-Y
- Burke H: Outcome prediction and the future of the TNM staging system. Journal of the National Cancer Institute. 2004, 96: 1408-1409. 10.1093/jnci/djh293
- Winer E, Carey L, Dowsett M, Tripathy D: Beyond anatomic staging: are we ready to take the leap to molecular classification? 2005 ASCO Annual Meeting. 2005, 46-59
- Chen D, Xing K, Henson D, Sheng L, Schwartz A, Cheng X: Developing prognostic systems of cancer patients by ensemble clustering. Journal of Biomedicine and Biotechnology. 2009, 7: doi:10.1155/2009/632786
- Wu D, Sheng L, Xu E, Xing K, Chen D: Analysis of an ensemble algorithm for clustering cancer data. Bioinformatics and Biomedicine Workshops (BIBMW), 2012 IEEE International Conference on: 4-7 October 2012. 2012, 754-755. 10.1109/BIBMW.2012.6470233
- Klein JP, Moeschberger ML: Survival Analysis: Techniques for Censored and Truncated Data. 2003, New York, USA: Springer
- Kaufman L, Rousseeuw P: Finding Groups in Data: An Introduction to Cluster Analysis. 1990, New York, USA: John Wiley & Sons
- Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2001, Springer-Verlag
- SEER. [http://seer.cancer.gov]
- The R Project for Statistical Computing. [http://www.r-project.org]

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.