association_rules: Association rules generation from frequent itemsets
Function to generate association rules from frequent itemsets
```python
from mlxtend.frequent_patterns import association_rules
```
Overview
Rule generation is a common task in the mining of frequent patterns. An association rule is an implication expression of the form $X \rightarrow Y$, where $X$ and $Y$ are disjoint itemsets [1]. A more concrete example based on consumer behaviour would be $\{\text{Diapers}\} \rightarrow \{\text{Beer}\}$, suggesting that people who buy diapers are also likely to buy beer. To evaluate the "interest" of such an association rule, different metrics have been developed. The current implementation makes use of several such metrics, including confidence and lift.
Metrics
The currently supported metrics for evaluating association rules and setting selection thresholds are listed below. Given a rule "A -> C", A stands for antecedent and C stands for consequent.
'support':
- introduced in [3]
The support metric is defined for itemsets, not association rules. The table produced by the association rule mining algorithm contains three different support metrics: 'antecedent support', 'consequent support', and 'support'. Here, 'antecedent support' computes the proportion of transactions that contain the antecedent A, and 'consequent support' computes the support for the itemset of the consequent C. The 'support' metric then computes the support of the combined itemset $A \cup C$.
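Written out, the rule support is simply the support of the union of antecedent and consequent:

$$\text{support}(A \rightarrow C) = \text{support}(A \cup C), \;\;\; \text{range: } [0, 1]$$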
Typically, support is used to measure the abundance or frequency (often interpreted as significance or importance) of an itemset in a database. We refer to an itemset as a "frequent itemset" if its support is larger than a specified minimum-support threshold. Note that, in general, due to the downward closure property, all subsets of a frequent itemset are also frequent; for example, if {Eggs, Onion} is frequent, then {Eggs} and {Onion} must be frequent as well.
'confidence':
- introduced in [3]
The confidence of a rule A->C is the probability of seeing the consequent in a transaction given that it also contains the antecedent. Note that the metric is directed, not symmetric: the confidence for A->C is generally different from the confidence for C->A. The confidence is 1 (maximal) for a rule A->C if the consequent occurs in every transaction that contains the antecedent.
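Expressed in terms of support, this conditional probability is:

$$\text{confidence}(A \rightarrow C) = \frac{\text{support}(A \rightarrow C)}{\text{support}(A)}, \;\;\; \text{range: } [0, 1]$$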
'lift':
- introduced in [4]
The lift metric is commonly used to measure how much more often the antecedent and consequent of a rule A->C occur together than we would expect if they were statistically independent. If A and C are independent, the Lift score will be exactly 1.
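Equivalently, lift is the observed confidence divided by the confidence expected under independence of A and C:

$$\text{lift}(A \rightarrow C) = \frac{\text{confidence}(A \rightarrow C)}{\text{support}(C)}, \;\;\; \text{range: } [0, \infty]$$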
'leverage':
- introduced in [5]
Leverage computes the difference between the observed frequency of A and C appearing together and the frequency that would be expected if A and C were independent. A leverage value of 0 indicates independence.
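As a formula:

$$\text{leverage}(A \rightarrow C) = \text{support}(A \rightarrow C) - \text{support}(A) \times \text{support}(C), \;\;\; \text{range: } [-1, 1]$$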
'conviction':
- introduced in [6]
A high conviction value means that the consequent is highly dependent on the antecedent. For instance, given a perfect confidence score of 1, the denominator of the formula below becomes 0 (due to 1 - 1), in which case the conviction score is defined as 'inf'. Similar to lift, if items are independent, the conviction is 1.
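The formula makes both special cases explicit:

$$\text{conviction}(A \rightarrow C) = \frac{1 - \text{support}(C)}{1 - \text{confidence}(A \rightarrow C)}, \;\;\; \text{range: } [0, \infty]$$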
'zhangs_metric':
- introduced in [7]
Measures both association and dissociation. Values range between -1 and 1; a positive value (>0) indicates association, and a negative value indicates dissociation.
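A common formulation, which normalizes leverage by its maximal attainable value, is:

$$\text{zhangs\_metric}(A \rightarrow C) = \frac{\text{leverage}(A \rightarrow C)}{\max\big(\text{support}(A \rightarrow C) \times (1 - \text{support}(A)),\; \text{support}(A) \times (\text{support}(C) - \text{support}(A \rightarrow C))\big)}, \;\;\; \text{range: } [-1, 1]$$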
References
[1] Tan, Steinbach, Kumar. Introduction to Data Mining. Pearson New International Edition. Harlow: Pearson Education Ltd., 2014. (pp. 327-414).
[2] Michael Hahsler, https://michael.hahsler.net/research/association_rules/measures.html
[3] R. Agrawal, T. Imielinski, and A. Swami. Mining associations between sets of items in large databases. In Proc. of the ACM SIGMOD Int'l Conference on Management of Data, pages 207-216, Washington D.C., May 1993
[4] S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. In Proc. of the ACM SIGMOD Int'l Conference on Management of Data, pages 255-264, Tucson, Arizona, USA, May 1997
[5] Piatetsky-Shapiro, G., Discovery, analysis, and presentation of strong rules. Knowledge Discovery in Databases, 1991: p. 229-248.
[6] Sergey Brin, Rajeev Motwani, Jeffrey D. Ullman, and Shalom Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD 1997, Proceedings ACM SIGMOD International Conference on Management of Data, pages 255-264, Tucson, Arizona, USA, May 1997
[7] Xiaowei Yan, Chengqi Zhang & Shichao Zhang (2009). Confidence metrics for association rule mining. Applied Artificial Intelligence, 23:8, 713-737. https://www.tandfonline.com/doi/pdf/10.1080/08839510903208062
Example 1 -- Generating Association Rules from Frequent Itemsets
The `association_rules` function takes a DataFrame of frequent itemsets as produced by the `apriori`, `fpgrowth`, or `fpmax` functions in `mlxtend.frequent_patterns`. To demonstrate its usage, we first create a pandas `DataFrame` of frequent itemsets as generated by the `fpgrowth` function:
```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpmax, fpgrowth

dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]

te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)

frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)

### alternatively:
# frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
# frequent_itemsets = fpmax(df, min_support=0.6, use_colnames=True)

frequent_itemsets
```
  | support | itemsets |
---|---|---|
0 | 1.0 | (Kidney Beans) |
1 | 0.8 | (Eggs) |
2 | 0.6 | (Yogurt) |
3 | 0.6 | (Onion) |
4 | 0.6 | (Milk) |
5 | 0.8 | (Kidney Beans, Eggs) |
6 | 0.6 | (Kidney Beans, Yogurt) |
7 | 0.6 | (Eggs, Onion) |
8 | 0.6 | (Kidney Beans, Onion) |
9 | 0.6 | (Eggs, Kidney Beans, Onion) |
10 | 0.6 | (Kidney Beans, Milk) |
The `association_rules()` function allows you to (1) specify your metric of interest and (2) the corresponding threshold; the metrics listed in the Metrics section above are supported. Let's say you are interested in rules derived from the frequent itemsets only if the level of confidence is above the 70 percent threshold (`min_threshold=0.7`):
```python
from mlxtend.frequent_patterns import association_rules

association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric |
---|---|---|---|---|---|---|---|---|---|---|
0 | (Kidney Beans) | (Eggs) | 1.0 | 0.8 | 0.8 | 0.80 | 1.00 | 0.00 | 1.0 | 0.0 |
1 | (Eggs) | (Kidney Beans) | 0.8 | 1.0 | 0.8 | 1.00 | 1.00 | 0.00 | inf | 0.0 |
2 | (Yogurt) | (Kidney Beans) | 0.6 | 1.0 | 0.6 | 1.00 | 1.00 | 0.00 | inf | 0.0 |
3 | (Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
4 | (Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
5 | (Onion) | (Kidney Beans) | 0.6 | 1.0 | 0.6 | 1.00 | 1.00 | 0.00 | inf | 0.0 |
6 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
7 | (Onion, Eggs) | (Kidney Beans) | 0.6 | 1.0 | 0.6 | 1.00 | 1.00 | 0.00 | inf | 0.0 |
8 | (Kidney Beans, Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
9 | (Eggs) | (Kidney Beans, Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
10 | (Onion) | (Kidney Beans, Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
11 | (Milk) | (Kidney Beans) | 0.6 | 1.0 | 0.6 | 1.00 | 1.00 | 0.00 | inf | 0.0 |
Example 2 -- Rule Generation and Selection Criteria
If you are interested in rules according to a different metric of interest, you can simply adjust the `metric` and `min_threshold` arguments. E.g., if you are only interested in rules that have a lift score of >= 1.2, you would do the following:
```python
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.2)
rules
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric |
---|---|---|---|---|---|---|---|---|---|---|
0 | (Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
1 | (Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
2 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
3 | (Kidney Beans, Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
4 | (Eggs) | (Kidney Beans, Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
5 | (Onion) | (Kidney Beans, Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
Pandas `DataFrame`s make it easy to filter the results further. Let's say we are only interested in rules that satisfy the following criteria:
- at least 2 antecedents
- a confidence > 0.75
- a lift score > 1.2
We could compute the antecedent length as follows:
rules["antecedent_len"] = rules["antecedents"].apply(lambda x: len(x))
rules
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric | antecedent_len |
---|---|---|---|---|---|---|---|---|---|---|---|
0 | (Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 | 1 |
1 | (Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 | 1 |
2 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 | 2 |
3 | (Kidney Beans, Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 | 2 |
4 | (Eggs) | (Kidney Beans, Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 | 1 |
5 | (Onion) | (Kidney Beans, Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 | 1 |
Then, we can use pandas' selection syntax as shown below:
```python
rules[(rules['antecedent_len'] >= 2) &
      (rules['confidence'] > 0.75) &
      (rules['lift'] > 1.2)]
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric | antecedent_len |
---|---|---|---|---|---|---|---|---|---|---|---|
3 | (Kidney Beans, Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.0 | 1.25 | 0.12 | inf | 0.5 | 2 |
Similarly, using the Pandas API, we can select entries based on the "antecedents" or "consequents" columns:
```python
rules[rules['antecedents'] == {'Eggs', 'Kidney Beans'}]
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric | antecedent_len |
---|---|---|---|---|---|---|---|---|---|---|---|
2 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 | 2 |
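Selection based on the "consequents" column works analogously. For example, the following (an illustrative query against the `rules` DataFrame from above) keeps only the rules whose consequent is the Onion itemset:

```python
rules[rules['consequents'] == {'Onion'}]
```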
Frozensets
Note that the entries in the "itemsets" column are of type `frozenset`, which is a built-in Python type that is similar to a Python `set` but immutable, which makes it more efficient for certain query or comparison operations (https://docs.python.org/3.6/library/stdtypes.html#frozenset). Since `frozenset`s are sets, the item order does not matter. I.e., the query
```python
rules[rules['antecedents'] == {'Eggs', 'Kidney Beans'}]
```
is equivalent to any of the following three queries:
```python
rules[rules['antecedents'] == {'Kidney Beans', 'Eggs'}]
rules[rules['antecedents'] == frozenset(('Eggs', 'Kidney Beans'))]
rules[rules['antecedents'] == frozenset(('Kidney Beans', 'Eggs'))]
```
Example 3 -- Frequent Itemsets with Incomplete Antecedent and Consequent Information
Most metrics computed by `association_rules` depend on the consequent and antecedent support scores of a given rule, which are provided in the frequent itemset input DataFrame. Consider the following example:
```python
import pandas as pd

itemset_dict = {'itemsets': [['177', '176'], ['177', '179'],
                             ['176', '178'], ['176', '179'],
                             ['93', '100'], ['177', '178'],
                             ['177', '176', '178']],
                'support': [0.253623, 0.253623, 0.217391,
                            0.217391, 0.181159, 0.108696, 0.108696]}

freq_itemsets = pd.DataFrame(itemset_dict)
freq_itemsets
```
  | itemsets | support |
---|---|---|
0 | [177, 176] | 0.253623 |
1 | [177, 179] | 0.253623 |
2 | [176, 178] | 0.217391 |
3 | [176, 179] | 0.217391 |
4 | [93, 100] | 0.181159 |
5 | [177, 178] | 0.108696 |
6 | [177, 176, 178] | 0.108696 |
Note that this is a "cropped" DataFrame that doesn't contain the support values of the item subsets. This can create problems if we want to compute the association rule metrics for, e.g., the rule `176 => 177`.
For example, the confidence is computed as

$$\text{confidence}(A \rightarrow C) = \frac{\text{support}(A \rightarrow C)}{\text{support}(A)}$$

But we do not have $\text{support}(A)$. All we know about the support of the antecedent "A" is that it is at least 0.253623.
In these scenarios, where not all metrics can be computed due to an incomplete input DataFrame, you can use the `support_only=True` option, which computes only the support column of a given rule (the one metric that does not require this additional information). "NaN's" will be assigned to all other metric columns:
```python
from mlxtend.frequent_patterns import association_rules

res = association_rules(freq_itemsets, support_only=True, min_threshold=0.1)
res
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric |
---|---|---|---|---|---|---|---|---|---|---|
0 | (176) | (177) | NaN | NaN | 0.253623 | NaN | NaN | NaN | NaN | NaN |
1 | (177) | (176) | NaN | NaN | 0.253623 | NaN | NaN | NaN | NaN | NaN |
2 | (179) | (177) | NaN | NaN | 0.253623 | NaN | NaN | NaN | NaN | NaN |
3 | (177) | (179) | NaN | NaN | 0.253623 | NaN | NaN | NaN | NaN | NaN |
4 | (178) | (176) | NaN | NaN | 0.217391 | NaN | NaN | NaN | NaN | NaN |
5 | (176) | (178) | NaN | NaN | 0.217391 | NaN | NaN | NaN | NaN | NaN |
6 | (179) | (176) | NaN | NaN | 0.217391 | NaN | NaN | NaN | NaN | NaN |
7 | (176) | (179) | NaN | NaN | 0.217391 | NaN | NaN | NaN | NaN | NaN |
8 | (100) | (93) | NaN | NaN | 0.181159 | NaN | NaN | NaN | NaN | NaN |
9 | (93) | (100) | NaN | NaN | 0.181159 | NaN | NaN | NaN | NaN | NaN |
10 | (178) | (177) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
11 | (177) | (178) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
12 | (178, 176) | (177) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
13 | (178, 177) | (176) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
14 | (177, 176) | (178) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
15 | (178) | (177, 176) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
16 | (176) | (178, 177) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
17 | (177) | (178, 176) | NaN | NaN | 0.108696 | NaN | NaN | NaN | NaN | NaN |
To clean up the representation, you may want to do the following:
```python
res = res[['antecedents', 'consequents', 'support']]
res
```
  | antecedents | consequents | support |
---|---|---|---|
0 | (176) | (177) | 0.253623 |
1 | (177) | (176) | 0.253623 |
2 | (179) | (177) | 0.253623 |
3 | (177) | (179) | 0.253623 |
4 | (178) | (176) | 0.217391 |
5 | (176) | (178) | 0.217391 |
6 | (179) | (176) | 0.217391 |
7 | (176) | (179) | 0.217391 |
8 | (100) | (93) | 0.181159 |
9 | (93) | (100) | 0.181159 |
10 | (178) | (177) | 0.108696 |
11 | (177) | (178) | 0.108696 |
12 | (178, 176) | (177) | 0.108696 |
13 | (178, 177) | (176) | 0.108696 |
14 | (177, 176) | (178) | 0.108696 |
15 | (178) | (177, 176) | 0.108696 |
16 | (176) | (178, 177) | 0.108696 |
17 | (177) | (178, 176) | 0.108696 |
Example 4 -- Pruning Association Rules
There is no specific API for pruning. Instead, the pandas API can be used on the resulting DataFrame to remove individual rows. E.g., suppose we have the following rules:
```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpmax, fpgrowth
from mlxtend.frequent_patterns import association_rules

dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]

te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)

frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.2)
rules
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric |
---|---|---|---|---|---|---|---|---|---|---|
0 | (Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
1 | (Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
2 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
3 | (Kidney Beans, Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
4 | (Eggs) | (Kidney Beans, Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
5 | (Onion) | (Kidney Beans, Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
and we want to remove the rule "(Onion, Kidney Beans) -> (Eggs)". In order to do this, we can define selection masks and remove this row as follows:
```python
antecedent_sele = rules['antecedents'] == frozenset({'Onion', 'Kidney Beans'})  # or frozenset({'Kidney Beans', 'Onion'})
consequent_sele = rules['consequents'] == frozenset({'Eggs'})
final_sele = (antecedent_sele & consequent_sele)

rules.loc[~final_sele]
```
  | antecedents | consequents | antecedent support | consequent support | support | confidence | lift | leverage | conviction | zhangs_metric |
---|---|---|---|---|---|---|---|---|---|---|
0 | (Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
1 | (Onion) | (Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
2 | (Kidney Beans, Eggs) | (Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
4 | (Eggs) | (Kidney Beans, Onion) | 0.8 | 0.6 | 0.6 | 0.75 | 1.25 | 0.12 | 1.6 | 1.0 |
5 | (Onion) | (Kidney Beans, Eggs) | 0.6 | 0.8 | 0.6 | 1.00 | 1.25 | 0.12 | inf | 0.5 |
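Note that `rules.loc[~final_sele]` returns a new, filtered DataFrame and leaves `rules` itself untouched. If you want to continue working with the pruned rule set, assign the result back; a minimal sketch using only standard pandas calls (the `reset_index` call is optional and merely renumbers the remaining rows):

```python
# keep only the rows that do not match the pruning mask
rules = rules.loc[~final_sele].reset_index(drop=True)
```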