KMedoids

class baybe.utils.clustering_algorithms.third_party.kmedoids.KMedoids[source]

Bases: BaseEstimator, ClusterMixin, TransformerMixin

k-medoids clustering.

Read more in the User Guide.

Parameters:
  • n_clusters (int, optional, default: 8) – The number of clusters to form as well as the number of medoids to generate.

  • metric (string, or callable, optional, default: 'euclidean') – What distance metric to use. See sklearn.metrics.pairwise_distances. If metric is 'precomputed', the user must feed the fit method a precomputed distance matrix instead of the design matrix X (a usage sketch follows this parameter list).

  • method ({'alternate', 'pam'}, default: 'alternate') – Which algorithm to use. ‘alternate’ is faster while ‘pam’ is more accurate.

  • init ({'random', 'heuristic', 'k-medoids++', 'build'}, or array-like of shape (n_clusters, n_features), optional, default: 'heuristic') – Specify medoid initialization method. 'random' selects n_clusters elements from the dataset. 'heuristic' picks the n_clusters points with the smallest sum distance to every other point. 'k-medoids++' follows an approach based on k-means++ and, in general, gives initial medoids which are more separated than those generated by the other methods. 'build' is a greedy initialization of the medoids used in the original PAM algorithm. 'build' is often more efficient but slower than the other initializations on big datasets, and it is also not robust to outliers; if there are outliers in the dataset, use another initialization. If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.

  • max_iter (int, optional, default: 300) – Specify the maximum number of iterations when fitting. It can be zero, in which case only the initialization is computed, which may be suitable for large datasets when the initialization is sufficiently efficient (i.e. for 'build' init).

  • random_state (int, RandomState instance or None, optional) – Specify random state for the random number generator. Used to initialise medoids when init=’random’.
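
As a supplementary sketch that is not part of the original docstring: with metric='precomputed', fit receives a distance matrix rather than raw coordinates. The choice of Manhattan distances below is arbitrary and purely illustrative; the import path follows the Examples section below.

import numpy as np
from sklearn.metrics import pairwise_distances
from baybe.utils.clustering_algorithms import KMedoids

X = np.asarray([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])

# Precompute the full (n_samples, n_samples) distance matrix and pass it
# to fit in place of the design matrix X.
D = pairwise_distances(X, metric="manhattan")
km_pre = KMedoids(n_clusters=2, metric="precomputed", random_state=0).fit(D)
km_pre.labels_           # cluster label for each sample
km_pre.medoid_indices_   # row indices of the medoids in the original X
# cluster_centers_ is None in precomputed mode, since only indices are known.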

Attributes:
  • cluster_centers_ (array, shape = (n_clusters, n_features)) – Cluster centers, i.e. medoids (elements from the original dataset), or None if metric == 'precomputed'.

  • medoid_indices_ (array, shape = (n_clusters,)) – The indices of the medoid rows in X

  • labels_ (array, shape = (n_samples,)) – Labels of each point

  • inertia_ (float) – Sum of distances of samples to their closest cluster center.

Examples

>>> from baybe.utils.clustering_algorithms import KMedoids
>>> import numpy as np
>>> X = np.asarray([[1, 2], [1, 4], [1, 0],
...                 [4, 2], [4, 4], [4, 0]])
>>> kmedoids = KMedoids(n_clusters=2, random_state=0).fit(X)
>>> kmedoids.labels_
array([0, 0, 0, 1, 1, 1])
>>> kmedoids.predict([[0, 0], [4, 4]])
array([0, 1])
>>> kmedoids.cluster_centers_
array([[1., 2.],
       [4., 2.]])
>>> kmedoids.inertia_
8.0

See scikit-learn-extra/examples/plot_kmedoids_digits.py for examples of KMedoids with various distance metrics.

References

Maranzana, F.E., 1963. On the location of supply points to minimize transportation costs. IBM Systems Journal, 2(2), pp.129-135.

Park, H.S. and Jun, C.H., 2009. A simple and fast algorithm for K-medoids clustering. Expert Systems with Applications, 36(2), pp.3336-3341.

See Also

KMeans – The KMeans algorithm minimizes the within-cluster sum-of-squares criterion. It scales well to a large number of samples.

Notes

Since all pairwise distances are calculated and stored in memory for the duration of fit, the space complexity is O(n_samples ** 2).

Public methods

__init__([n_clusters, metric, method, init, ...])

fit(X[, y])

Fit K-Medoids to the provided data.

fit_predict(X[, y])

Perform clustering on X and return cluster labels.

fit_transform(X[, y])

Fit to data, then transform it.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

predict(X)

Predict the closest cluster for each sample in X.

set_output(*[, transform])

Set output container.

set_params(**params)

Set the parameters of this estimator.

transform(X)

Transforms X to cluster-distance space.

__init__(n_clusters=8, metric='euclidean', method='alternate', init='heuristic', max_iter=300, random_state=None)[source]
fit(X, y=None)[source]

Fit K-Medoids to the provided data.

Parameters:
  • X ({array-like, sparse matrix}, shape = (n_samples, n_features), or (n_samples, n_samples) if metric == 'precomputed') – Dataset to cluster.

  • y (Ignored)

Returns:

self
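
As noted for max_iter above, fitting with max_iter=0 computes only the initialization. A minimal sketch, not from the original docs, reusing the imports and X array from the Examples section:

# Only the greedy 'build' initialization is computed; no update iterations run.
km = KMedoids(n_clusters=2, init="build", max_iter=0).fit(X)
km.medoid_indices_  # medoids selected by the initialization step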

fit_predict(X, y=None, **kwargs)

Perform clustering on X and return cluster labels.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Input data.

  • y (Ignored) – Not used, present for API consistency by convention.

  • **kwargs (dict) –

    Arguments to be passed to fit.

    Added in version 1.4.

Returns:

labels – Cluster labels.

Return type:

ndarray of shape (n_samples,), dtype=np.int64
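
A minimal usage sketch, not from the original docs, assuming the imports and X array from the Examples section:

# Equivalent to calling fit(X) and then reading labels_.
labels = KMedoids(n_clusters=2, random_state=0).fit_predict(X)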

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Input samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).

  • **fit_params (dict) – Additional fit parameters.

Returns:

X_new – Transformed array.

Return type:

ndarray of shape (n_samples, n_features_new)
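
For this estimator the transformed space is the cluster-distance space, so the result has one column per medoid. A minimal sketch, assuming the imports and X array from the Examples section:

# Fit the medoids and return each sample's distance to every medoid.
distances = KMedoids(n_clusters=2, random_state=0).fit_transform(X)
distances.shape  # (n_samples, n_clusters), here (6, 2)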

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:

routing – A MetadataRequest encapsulating routing information.

Return type:

MetadataRequest

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

predict(X)[source]

Predict the closest cluster for each sample in X.

Parameters:
  • X ({array-like, sparse matrix}, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed') – New data to predict.

Returns:

labels – Index of the cluster each sample belongs to.

Return type:

array, shape = (n_query,)
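
When metric == 'precomputed', the X passed to predict must contain the distances from the query points to the original training points, not raw coordinates. A hedged sketch building on the precomputed example above (km_pre and X defined there):

# Distances from two query points to the six training points: shape (2, 6).
D_query = pairwise_distances([[0, 0], [4, 4]], X, metric="manhattan")
km_pre.predict(D_query)  # index of the closest medoid for each query point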

set_output(*, transform=None)

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:

transform ({"default", "pandas", "polars"}, default=None) –

Configure output of transform and fit_transform.

  • ”default”: Default output format of a transformer

  • ”pandas”: DataFrame output

  • ”polars”: Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: “polars” option was added.

Returns:

self – Estimator instance.

Return type:

estimator instance

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params (dict) – Estimator parameters.

Returns:

self – Estimator instance.

Return type:

estimator instance
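
A minimal sketch of the get_params/set_params round trip; KMedoids has no nested sub-estimators, so only plain parameter names apply here:

km = KMedoids(n_clusters=2)
km.get_params()["metric"]                  # 'euclidean'
km.set_params(n_clusters=3, method="pam")  # returns the same estimator instance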

transform(X)[source]

Transforms X to cluster-distance space.

Parameters:
  • X ({array-like, sparse matrix}, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed') – Data to transform.

Returns:

X_new – X transformed in the new space of distances to cluster centers.

Return type:

{array-like, sparse matrix}, shape = (n_query, n_clusters)
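
A minimal sketch, not from the original docs, of transforming new points into distances to the fitted medoids, with imports and X as in the Examples section:

kmedoids = KMedoids(n_clusters=2, random_state=0).fit(X)
# One row per query point, one column per fitted medoid.
kmedoids.transform([[0, 0], [4, 4]])  # shape (2, n_clusters)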