mlxtend version: 0.9.2dev
ExhaustiveFeatureSelector
ExhaustiveFeatureSelector(estimator, min_features=1, max_features=1, print_progress=True, scoring='accuracy', cv=5, n_jobs=1, pre_dispatch='2*n_jobs', clone_estimator=True)
Exhaustive Feature Selection for Classification and Regression. (new in v0.4.3)
Parameters

estimator
: scikit-learn classifier or regressor
min_features
: int (default: 1). Minimum number of features to select.

max_features
: int (default: 1). Maximum number of features to select.

print_progress
: bool (default: True). Prints progress as the number of epochs to stderr.

scoring
: str (default: 'accuracy'). Scoring metric in {'accuracy', 'f1', 'precision', 'recall', 'roc_auc'} for classifiers, {'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'r2'} for regressors, or a callable object or function with signature scorer(estimator, X, y).
cv
: int (default: 5). Scikit-learn cross-validation generator or int. If the estimator is a classifier (or y consists of integer class labels), stratified k-fold is performed; regular k-fold cross-validation otherwise. No cross-validation if cv is None, False, or 0.
n_jobs
: int (default: 1). The number of CPUs to use for evaluating different feature subsets in parallel. -1 means 'all CPUs'.

pre_dispatch
: int or string (default: '2*n_jobs'). Controls the number of jobs that get dispatched during parallel execution if n_jobs > 1 or n_jobs=-1. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned (use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs); an int, giving the exact number of total jobs that are spawned; or a string, giving an expression as a function of n_jobs, as in '2*n_jobs'.

clone_estimator
: bool (default: True). Clones the estimator if True; works with the original estimator instance if False. Set to False if the estimator doesn't implement scikit-learn's set_params and get_params methods. In that case, it is also required to set cv=0 and n_jobs=1.
Attributes

best_idx_
: array-like, shape = [n_predictions]. Feature indices of the selected feature subset.

best_score_
: float. Cross-validation average score of the selected subset.

subsets_
: dict. A dictionary of selected feature subsets during the exhaustive selection, where the dictionary keys are the lengths k of these feature subsets. The dictionary values are dictionaries themselves with the following keys: 'feature_idx' (tuple of indices of the feature subset), 'cv_scores' (list of individual cross-validation scores), and 'avg_score' (average cross-validation score).
Methods
fit(X, y)
Perform feature selection and learn model from training data.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.

y
: array-like, shape = [n_samples]. Target values.
Returns
self
: object
fit_transform(X, y)
Fit to training data and return the best selected features from X.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.
Returns
Feature subset of X, shape={n_samples, k_features}
get_metric_dict(confidence_interval=0.95)
Return metric dictionary
Parameters

confidence_interval
: float (default: 0.95). A positive float between 0.0 and 1.0 to compute the confidence interval bounds of the CV score averages.
Returns
Dictionary with items where each dictionary value is a list with the number of iterations (number of feature subsets) as its length. The dictionary keys corresponding to these lists are as follows: 'feature_idx' (tuple of the indices of the feature subset), 'cv_scores' (list with individual CV scores), 'avg_score' (average of the CV scores), 'std_dev' (standard deviation of the CV score average), 'std_err' (standard error of the CV score average), and 'ci_bound' (confidence interval bound of the CV score average).
get_params(deep=True)
Get parameters for this estimator.
Parameters

deep
: boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns

params
: mapping of string to any. Parameter names mapped to their values.
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it's possible to update each
component of a nested object.
Returns
self
transform(X)
Return the best selected features from X.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.
Returns
Feature subset of X, shape={n_samples, k_features}
ColumnSelector
ColumnSelector(cols=None)
Base class for all estimators in scikit-learn
Notes
All estimators should specify all the parameters that can be set
at the class level in their __init__
as explicit keyword
arguments (no *args
or **kwargs
).
Methods
fit(X, y=None)
Mock method. Does nothing.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.

y
: array-like, shape = [n_samples] (default: None)
Returns
self
fit_transform(X, y=None)
Return a slice of the input array.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.

y
: array-like, shape = [n_samples] (default: None)
Returns

X_slice
: shape = [n_samples, k_features]. Subset of the feature space where k_features <= n_features.
get_params(deep=True)
Get parameters for this estimator.
Parameters

deep
: boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns

params
: mapping of string to any. Parameter names mapped to their values.
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it's possible to update each
component of a nested object.
Returns
self
transform(X, y=None)
Return a slice of the input array.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.

y
: array-like, shape = [n_samples] (default: None)
Returns

X_slice
: shape = [n_samples, k_features]. Subset of the feature space where k_features <= n_features.
SequentialFeatureSelector
SequentialFeatureSelector(estimator, k_features=1, forward=True, floating=False, verbose=0, scoring=None, cv=5, n_jobs=1, pre_dispatch='2*n_jobs', clone_estimator=True)
Sequential Feature Selection for Classification and Regression.
Parameters

estimator
: scikitlearn classifier or regressor 
k_features
: int or tuple or str (default: 1). Number of features to select, where k_features < the full feature set. New in 0.4.2: A tuple containing a min and max value can be provided, and the SFS will return any feature combination between min and max that scored highest in cross-validation. For example, the tuple (1, 4) will return any combination of 1 up to 4 features instead of a fixed number of features k. New in 0.8.0: A string argument "best" or "parsimonious". If "best" is provided, the feature selector will return the feature subset with the best cross-validation performance. If "parsimonious" is provided, the smallest feature subset that is within one standard error of the best cross-validation performance will be selected.

forward
: bool (default: True). Forward selection if True, backward selection otherwise.

floating
: bool (default: False). Adds a conditional exclusion/inclusion if True.

verbose
: int (default: 0). Level of verbosity to use in logging. If 0, no output; if 1, the number of features in the current set; if 2, detailed logging including timestamp and CV scores at each step.

scoring
: str, callable, or None (default: None). If None (default), uses 'accuracy' for sklearn classifiers and 'r2' for sklearn regressors. If str, uses a sklearn scoring metric string identifier, for example {'accuracy', 'f1', 'precision', 'recall', 'roc_auc'} for classifiers, {'mean_absolute_error', 'mean_squared_error'/'neg_mean_squared_error', 'median_absolute_error', 'r2'} for regressors. If a callable object or function is provided, it must conform to sklearn's signature scorer(estimator, X, y); see http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html for more information.
cv
: int (default: 5). Scikit-learn cross-validation generator or int. If the estimator is a classifier (or y consists of integer class labels), stratified k-fold is performed; regular k-fold cross-validation otherwise. No cross-validation if cv is None, False, or 0.
n_jobs
: int (default: 1). The number of CPUs to use for evaluating different feature subsets in parallel. -1 means 'all CPUs'.

pre_dispatch
: int or string (default: '2*n_jobs'). Controls the number of jobs that get dispatched during parallel execution if n_jobs > 1 or n_jobs=-1. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned (use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs); an int, giving the exact number of total jobs that are spawned; or a string, giving an expression as a function of n_jobs, as in '2*n_jobs'.

clone_estimator
: bool (default: True). Clones the estimator if True; works with the original estimator instance if False. Set to False if the estimator doesn't implement scikit-learn's set_params and get_params methods. In that case, it is also required to set cv=0 and n_jobs=1.
Attributes

k_feature_idx_
: array-like, shape = [n_predictions]. Feature indices of the selected feature subset.

k_score_
: float. Cross-validation average score of the selected subset.

subsets_
: dict. A dictionary of selected feature subsets during the sequential selection, where the dictionary keys are the lengths k of these feature subsets. The dictionary values are dictionaries themselves with the following keys: 'feature_idx' (tuple of indices of the feature subset), 'cv_scores' (list of individual cross-validation scores), and 'avg_score' (average cross-validation score).
Methods
fit(X, y)
Perform feature selection and learn model from training data.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.

y
: array-like, shape = [n_samples]. Target values.
Returns
self
: object
fit_transform(X, y)
Fit to training data then reduce X to its most important features.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.
Returns
Reduced feature subset of X, shape={n_samples, k_features}
get_metric_dict(confidence_interval=0.95)
Return metric dictionary
Parameters

confidence_interval
: float (default: 0.95). A positive float between 0.0 and 1.0 to compute the confidence interval bounds of the CV score averages.
Returns
Dictionary with items where each dictionary value is a list with the number of iterations (number of feature subsets) as its length. The dictionary keys corresponding to these lists are as follows: 'feature_idx' (tuple of the indices of the feature subset), 'cv_scores' (list with individual CV scores), 'avg_score' (average of the CV scores), 'std_dev' (standard deviation of the CV score average), 'std_err' (standard error of the CV score average), and 'ci_bound' (confidence interval bound of the CV score average).
get_params(deep=True)
Get parameters for this estimator.
Parameters

deep
: boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns

params
: mapping of string to any. Parameter names mapped to their values.
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter>
so that it's possible to update each
component of a nested object.
Returns
self
transform(X)
Reduce X to its most important features.
Parameters

X
: {array-like, sparse matrix}, shape = [n_samples, n_features]. Training vectors, where n_samples is the number of samples and n_features is the number of features.
Returns
Reduced feature subset of X, shape={n_samples, k_features}
plot_sequential_feature_selection
plot_sequential_feature_selection(*args)
Note that importing this function from mlxtend.evaluate has been deprecated and will no longer be supported in mlxtend 0.6. Please use from mlxtend.plotting import plot_sequential_feature_selection instead.