<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="6.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Francis R. Bach</style></author><author><style face="normal" font="default" size="100%">Michael I. Jordan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Learning Spectral Clustering, With Application To Speech Separation</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Machine Learning Research 7</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style face="normal" font="default" size="100%">10/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.cs.berkeley.edu/~jordan/papers/sgp-jmlr.pdf</style></url></web-urls></urls><abstract><style face="normal" font="default" size="100%">Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity 
matrix to partition points into disjoint clusters, with points in the same cluster having high similarity 
and points in different clusters having low similarity. In this paper, we derive new cost functions 
for spectral clustering based on measures of error between a given partition and a solution of the 
spectral relaxation of a minimum normalized cut problem. Minimizing these cost functions with 
respect to the partition leads to new spectral clustering algorithms. Minimizing with respect to the 
similarity matrix leads to algorithms for learning the similarity matrix from fully labelled data sets. 
We apply our learning algorithm to the blind one-microphone speech separation problem, casting 
the problem as one of segmentation of the spectrogram. 
</style></abstract></record></records></xml>