EnsembleVoteClassifier

EnsembleVoteClassifier(clfs, voting='hard', weights=None, verbose=0, use_clones=True, fit_base_estimators=True)

Soft Voting/Majority Rule classifier for scikit-learn estimators.

Parameters

  • clfs : array-like, shape = [n_classifiers]

    A list of classifiers. Invoking the fit method on the EnsembleVoteClassifier will fit clones of these original classifiers, and the clones will be stored in the class attribute clfs_, if use_clones=True (default) and fit_base_estimators=True (default).

  • voting : str, {'hard', 'soft'} (default='hard')

    If 'hard', uses predicted class labels for majority rule voting. Else if 'soft', predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers. (A sketch of both voting modes follows this parameter list.)

  • weights : array-like, shape = [n_classifiers], optional (default=None)

    Sequence of weights (float or int) to weight the occurrences of predicted class labels (hard voting) or class probabilities before averaging (soft voting). Uses uniform weights if None.

  • verbose : int, optional (default=0)

    Controls the verbosity of the building process.

      • verbose=0 (default): Prints nothing
      • verbose=1: Prints the number & name of the clf being fitted
      • verbose=2: Prints info about the parameters of the clf being fitted
      • verbose>2: Changes the verbose param of the underlying clf to self.verbose - 2

  • use_clones : bool (default: True)

    Clones the classifiers for ensemble voting if True (default) or else uses the original ones, which will be refitted on the dataset upon calling the fit method. Hence, if use_clones=True, the original input classifiers will remain unmodified upon using the EnsembleVoteClassifier's fit method. Setting use_clones=False is recommended if you are working with estimators that support the scikit-learn fit/predict API interface but are not compatible with scikit-learn's clone function.

  • fit_base_estimators : bool (default: True)

    Refits the classifiers in clfs if True; otherwise uses references to the clfs (and assumes that the classifiers were already fit). Note: fit_base_estimators=False will enforce use_clones to be False and is incompatible with most scikit-learn wrappers! For instance, any form of cross-validation would require re-fitting the classifiers to the training folds, which would raise a NotFittedError if fit_base_estimators=False. (New in mlxtend v0.6.) A sketch of reusing pre-fitted classifiers follows the Attributes list below.
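As a rough illustration of what the two voting modes compute (a minimal NumPy sketch of the math described above, not mlxtend's internal code; the probability values are made up):

```
import numpy as np

# Hypothetical outputs of 3 classifiers for 4 samples and 2 classes:
# probas[i] holds the predicted class probabilities of classifier i.
probas = np.array([
    [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.3, 0.7]],  # clf 1
    [[0.7, 0.3], [0.6, 0.4], [0.5, 0.5], [0.2, 0.8]],  # clf 2
    [[0.6, 0.4], [0.4, 0.6], [0.3, 0.7], [0.1, 0.9]],  # clf 3
])
weights = np.array([2, 1, 1])

# Soft voting: argmax of the weighted average of the class probabilities.
soft_votes = np.argmax(np.average(probas, axis=0, weights=weights), axis=1)

# Hard voting: weighted majority vote over the predicted class labels.
labels = np.argmax(probas, axis=2)  # each classifier's label predictions
hard_votes = np.apply_along_axis(
    lambda col: np.bincount(col, weights=weights).argmax(), axis=0, arr=labels)

print(soft_votes)  # soft-voting ensemble predictions
print(hard_votes)  # hard-voting ensemble predictions
```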

Attributes

  • classes_ : array-like, shape = [n_predictions]

    Class labels.

  • clfs : array-like, shape = [n_classifiers]

    The input classifiers; may be overwritten if use_clones=False

  • clfs_ : array-like, shape = [n_classifiers]

    Fitted input classifiers; clones if use_clones=True
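The clfs_ attribute together with fit_base_estimators=False allows reusing classifiers that were already fitted elsewhere. A minimal sketch (assuming scikit-learn and mlxtend are installed; the dataset here is synthetic):

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from mlxtend.classifier import EnsembleVoteClassifier

X, y = make_classification(n_samples=100, random_state=1)

# Fit the base estimators up front, e.g., on data that is later unavailable.
clf1 = LogisticRegression(max_iter=1000, random_state=1).fit(X, y)
clf2 = GaussianNB().fit(X, y)

# fit_base_estimators=False reuses the fitted references instead of refitting
# them; per the notes above it also implies use_clones=False, so clf1 and
# clf2 themselves are stored in clfs_.
eclf = EnsembleVoteClassifier(clfs=[clf1, clf2],
                              voting='soft',
                              use_clones=False,
                              fit_base_estimators=False)
eclf.fit(X, y)  # no refitting of clf1/clf2 takes place
print(eclf.predict(X[:5]))
```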

Examples

```
>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier
>>> from mlxtend.classifier import EnsembleVoteClassifier
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(random_state=1)
>>> clf3 = GaussianNB()
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> eclf1 = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3],
... voting='hard', verbose=1)
>>> eclf1 = eclf1.fit(X, y)
>>> print(eclf1.predict(X))
[1 1 1 2 2 2]
>>> eclf2 = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3], voting='soft')
>>> eclf2 = eclf2.fit(X, y)
>>> print(eclf2.predict(X))
[1 1 1 2 2 2]
>>> eclf3 = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3],
...                          voting='soft', weights=[2,1,1])
>>> eclf3 = eclf3.fit(X, y)
>>> print(eclf3.predict(X))
[1 1 1 2 2 2]
```

For more usage examples, please see
https://rasbt.github.io/mlxtend/user_guide/classifier/EnsembleVoteClassifier/

Methods


fit(X, y, sample_weight=None)

Fit the base classifiers on the training data.

Parameters

  • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

    Training vectors, where n_samples is the number of samples and n_features is the number of features.

  • y : array-like, shape = [n_samples]

    Target values.

  • sample_weight : array-like, shape = [n_samples], optional

    Sample weights passed as sample_weight to the fit method of each classifier in the clfs list. Raises an error if a classifier does not support sample_weight in its fit method.

Returns

  • self : object
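For example, continuing the ensemble eclf1 from the Examples section above, per-sample weights can be forwarded to every base classifier (all three classifiers used there support sample_weight):

```
import numpy as np

# Up-weight the last two training samples; the weights are forwarded to
# the fit() method of each base classifier.
sw = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0])
eclf1.fit(X, y, sample_weight=sw)
```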

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to `X` and `y` with optional parameters `fit_params`
and returns a transformed version of `X`.

Parameters

  • X : array-like of shape (n_samples, n_features)

    Input samples.

  • y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

    Target values (None for unsupervised transformations).

  • **fit_params : dict

    Additional fit parameters.

Returns

  • X_new : ndarray of shape (n_samples, n_features_new)

    Transformed array.


get_params(deep=True)

Return estimator parameter names for GridSearch support.
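A hedged sketch of grid search over nested parameters. The nested keys follow scikit-learn's <component>__<parameter> convention; the exact component name ('logisticregression') is an assumption here, so inspect get_params().keys() to confirm it:

```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from mlxtend.classifier import EnsembleVoteClassifier

X, y = load_iris(return_X_y=True)
eclf = EnsembleVoteClassifier(clfs=[LogisticRegression(max_iter=1000),
                                    GaussianNB()],
                              voting='soft')

# print(eclf.get_params().keys())  # lists the tunable (nested) parameters
param_grid = {'logisticregression__C': [0.1, 1.0, 10.0]}  # assumed key name
grid = GridSearchCV(estimator=eclf, param_grid=param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```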


predict(X)

Predict class labels for X.

Parameters

  • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

    Input samples, where n_samples is the number of samples and n_features is the number of features.

Returns

  • maj : array-like, shape = [n_samples]

    Predicted class labels.


predict_proba(X)

Predict class probabilities for X.

Parameters

  • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

    Input samples, where n_samples is the number of samples and n_features is the number of features.

Returns

  • avg : array-like, shape = [n_samples, n_classes]

    Weighted average probability for each class per sample.
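Continuing the soft-voting ensemble eclf2 from the Examples section, a quick sanity check of the returned shape (since the result is a weighted average of probability distributions, each row should still sum to 1):

```
probas = eclf2.predict_proba(X)
print(probas.shape)        # (n_samples, n_classes), i.e. (6, 2) here
print(probas.sum(axis=1))  # each row is a probability distribution
```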


score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy
which is a harsh metric since you require for each sample that
each label set be correctly predicted.

Parameters

  • X : array-like of shape (n_samples, n_features)

    Test samples.

  • y : array-like of shape (n_samples,) or (n_samples, n_outputs)

    True labels for X.

  • sample_weight : array-like of shape (n_samples,), default=None

    Sample weights.

Returns

  • score : float

    Mean accuracy of self.predict(X) w.r.t. y.


set_output(*, transform=None)

Set output container.

See the scikit-learn documentation on the set_output API
for an example of how to use it.

Parameters

  • transform : {"default", "pandas"}, default=None

    Configure output of transform and fit_transform.

    • "default": Default output format of a transformer
    • "pandas": DataFrame output
    • None: Transform configuration is unchanged

Returns

  • self : estimator instance

    Estimator instance.


set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects
(such as sklearn.pipeline.Pipeline). The latter have
parameters of the form `<component>__<parameter>` so that it's
possible to update each component of a nested object.

Parameters

  • **params : dict

    Estimator parameters.

Returns

  • self : estimator instance

    Estimator instance.
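For instance, a nested parameter of a single base classifier can be updated through the ensemble; the key name ('logisticregression__C') is an assumption, so check get_params().keys() first:

```
# Set C of the LogisticRegression inside eclf2 from the Examples section.
eclf2.set_params(logisticregression__C=100.0)
```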


transform(X)

Return class labels or probabilities for X for each estimator.

Parameters

  • X : {array-like, sparse matrix}, shape = [n_samples, n_features]

    Input samples, where n_samples is the number of samples and n_features is the number of features.

Returns

  • If voting='soft' : array-like, shape = [n_classifiers, n_samples, n_classes]

    Class probabilities calculated by each classifier.

  • If voting='hard' : array-like, shape = [n_classifiers, n_samples]

    Class labels predicted by each classifier.
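Continuing the Examples section, a short sketch checking the two documented output shapes (eclf2 uses soft voting, eclf1 hard voting):

```
out_soft = eclf2.transform(X)
print(out_soft.shape)  # (n_classifiers, n_samples, n_classes), i.e. (3, 6, 2)

out_hard = eclf1.transform(X)
print(out_hard.shape)  # (n_classifiers, n_samples), i.e. (3, 6)
```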