scikit-notes 09: Case Study - Titanic Survival
Table of Contents
1 Case Study - Titanic Survival
1.1 Feature Extraction
Here we will talk about an important piece of machine learning: the extraction of quantitative features from data. By the end of this section you will
- Know how features are extracted from real-world data.
- See an example of extracting numerical features from textual data.
In addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks.
1.2 What Are Features?
1.2.1 Numerical Features
Recall that data in scikit-learn is expected to be in two-dimensional arrays of size n_samples × n_features.
Previously, we looked at the iris dataset, which has 150 samples and 4 features:
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)
These features are:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
Numerical features such as these are pretty straightforward: each sample contains a list of floating-point numbers corresponding to the features.
1.2.2 Categorical Features
What if you have categorical features? For example, imagine there is data on the color of each iris:
color in [red, blue, purple]
1.2.2.1 Why a plain numerical encoding of categorical features is a bad idea
You might be tempted to assign numbers to these features,
i.e.
- red=1,
- blue=2,
- purple=3
but in general this is a bad idea. Estimators tend to operate under the assumption that numerical features lie on some continuous scale with meaningful numerical relationships, so, for example,
- 1 and 2 are more alike than 1 and 3,
- 1 + 2 = 3
- etc.
and this is often not the case for categorical features.
1.2.2.2 Two kinds of categorical features: nominal and ordinal
- nominal categorical features: the color example above is in fact a subcategory of "categorical" features, namely "nominal" features. Nominal features don't imply an order.
- ordinal categorical features: categorical features that do imply an order. An example of ordinal features would be T-shirt sizes, e.g., XL > L > M > S (see the sketch after this list).
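As a quick illustration, an ordinal feature can be encoded by mapping each category to an integer that preserves the order. This is only a minimal sketch; the T-shirt size column and the chosen mapping below are made up for illustration:
import pandas as pd

# hypothetical T-shirt size column, just for illustration
sizes = pd.Series(['M', 'XL', 'S', 'L'])

# map each size to an integer that preserves the order S < M < L < XL
size_order = {'S': 1, 'M': 2, 'L': 3, 'XL': 4}
sizes.map(size_order)
# 0    2
# 1    4
# 2    1
# 3    3
# dtype: int64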
One work-around for parsing nominal features into a format that prevents the classification algorithm from asserting an order is the so-called one-hot encoding representation. Here, we give each category its own dimension.
The enriched iris feature set would hence be in this case:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- color=purple (1.0 or 0.0)
- color=blue (1.0 or 0.0)
- color=red (1.0 or 0.0)
Note that using many of these categorical features may result in data which is better represented as a sparse matrix, as we'll see with the text classification example below.
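To make the one-hot idea concrete before moving on, here is a minimal sketch using pandas get_dummies on a hypothetical color column (the real iris data has no color feature; the values are made up):
import pandas as pd

colors = pd.Series(['purple', 'blue', 'red', 'purple'])  # hypothetical color feature
pd.get_dummies(colors, prefix='color')
#    color_blue  color_purple  color_red
# 0           0             1          0
# 1           1             0          0
# 2           0             0          1
# 3           0             1          0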
1.2.3 Using the DictVectorizer to encode categorical features
When the source data is encoded as a list of dicts where the values are either string names for categories or numerical values, you can use the DictVectorizer class to compute the boolean expansion of the categorical features while leaving the numerical features untouched:
measurements = [
    {'city': 'Dubai', 'temperature': 33.},
    {'city': 'London', 'temperature': 12.},
    {'city': 'San Francisco', 'temperature': 18.},
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
DictVectorizer(dtype=<class 'numpy.float64'>, separator='=', sort=True, sparse=True)
vec.fit_transform(measurements).toarray()
array([[ 1., 0., 0., 33.], [ 0., 1., 0., 12.], [ 0., 0., 1., 18.]])
vec.get_feature_names()
['city=Dubai', 'city=London', 'city=San Francisco', 'temperature']
1.2.4 Derived Features
Another common feature type is derived features, where some pre-processing step is applied to the data to generate features that are somehow more informative. Derived features may be based on feature extraction and dimensionality reduction (such as PCA or manifold learning), may be linear or nonlinear combinations of features (such as in polynomial regression), or may be some more sophisticated transform of the features.
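For instance, polynomial features are one simple kind of derived feature. A minimal sketch with scikit-learn's PolynomialFeatures (the input values are made up):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1., 2.],
              [3., 4.]])
# degree-2 expansion: [1, x1, x2, x1^2, x1*x2, x2^2]
poly = PolynomialFeatures(degree=2)
poly.fit_transform(X)
# array([[ 1.,  1.,  2.,  1.,  2.,  4.],
#        [ 1.,  3.,  4.,  9., 12., 16.]])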
1.3 Combining Numerical and Categorical Features
As an example of how to work with both categorical and numerical data, we will perform survival prediction for the passengers of the RMS Titanic.
We will use a version of the Titanic data (titanic3.xls) from here. We converted the .xls to .csv for easier manipulation, but the data is otherwise unchanged.
We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
import os
import pandas as pd

titanic = pd.read_csv(os.path.join('datasets', 'titanic3.csv'))
print(titanic.columns)
Here is a broad description of the keys and what they mean:
pclass    | Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival  | Survival (0 = No; 1 = Yes)
name      | Name
sex       | Sex
age       | Age
sibsp     | Number of Siblings/Spouses Aboard
parch     | Number of Parents/Children Aboard
ticket    | Ticket Number
fare      | Passenger Fare
cabin     | Cabin
embarked  | Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat      | Lifeboat
body      | Body Identification Number
home.dest | Home/Destination
In general, it looks like name, sex, cabin, embarked, boat, body, and home.dest may be candidates for categorical features, while the rest appear to be numerical features. We can also look at the first couple of rows in the dataset to get a better understanding:
titanic.head()
   pclass  survived                                              name     sex  \
0       1         1                     Allen, Miss. Elisabeth Walton  female
1       1         1                    Allison, Master. Hudson Trevor    male
2       1         0                      Allison, Miss. Helen Loraine  female
3       1         0              Allison, Mr. Hudson Joshua Creighton    male
4       1         0  Allison, Mrs. Hudson J C (Bessie Waldo Daniels)  female

       age  sibsp  parch  ticket      fare    cabin embarked boat   body  \
0  29.0000      0      0   24160  211.3375       B5        S    2    NaN
1   0.9167      1      2  113781  151.5500  C22 C26        S   11    NaN
2   2.0000      1      2  113781  151.5500  C22 C26        S  NaN    NaN
3  30.0000      1      2  113781  151.5500  C22 C26        S  NaN  135.0
4  25.0000      1      2  113781  151.5500  C22 C26        S  NaN    NaN

                         home.dest
0                     St Louis, MO
1  Montreal, PQ / Chesterville, ON
2  Montreal, PQ / Chesterville, ON
3  Montreal, PQ / Chesterville, ON
4  Montreal, PQ / Chesterville, ON
We clearly want to discard the "boat" and "body" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use "pclass", "sex", "age", "sibsp", "parch", "fare" and "embarked" as our features:
# print(type(titanic))                 #<- DataFrame
# print(type(titanic.survived))        #<- Series
# print(type(titanic.survived.values)) #<- ndarray

labels = titanic.survived.values
features = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]
features.head()
   pclass     sex      age  sibsp  parch      fare embarked
0       1  female  29.0000      0      0  211.3375        S
1       1    male   0.9167      1      2  151.5500        S
2       1  female   2.0000      1      2  151.5500        S
3       1    male  30.0000      1      2  151.5500        S
4       1  female  25.0000      1      2  151.5500        S
The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings "male" and "female" into binary variables that indicate the gender, and similarly for "embarked". We can do that using the pandas get_dummies function:
pd.get_dummies(features).head()
   pclass      age  sibsp  parch      fare  sex_female  sex_male  embarked_C  \
0       1  29.0000      0      0  211.3375           1         0           0
1       1   0.9167      1      2  151.5500           0         1           0
2       1   2.0000      1      2  151.5500           1         0           0
3       1  30.0000      1      2  151.5500           0         1           0
4       1  25.0000      1      2  151.5500           1         0           0

   embarked_Q  embarked_S
0           0           1
1           0           1
2           0           1
3           0           1
4           0           1
This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the columns parameter, and include pclass:
features_dummies = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])
features_dummies.head(n=16)
        age  sibsp  parch      fare  pclass_1  pclass_2  pclass_3  sex_female  \
0   29.0000      0      0  211.3375         1         0         0           1
1    0.9167      1      2  151.5500         1         0         0           0
2    2.0000      1      2  151.5500         1         0         0           1
3   30.0000      1      2  151.5500         1         0         0           0
4   25.0000      1      2  151.5500         1         0         0           1
5   48.0000      0      0   26.5500         1         0         0           0
6   63.0000      1      0   77.9583         1         0         0           1
7   39.0000      0      0    0.0000         1         0         0           0
8   53.0000      2      0   51.4792         1         0         0           1
9   71.0000      0      0   49.5042         1         0         0           0
10  47.0000      1      0  227.5250         1         0         0           0
11  18.0000      1      0  227.5250         1         0         0           1
12  24.0000      0      0   69.3000         1         0         0           1
13  26.0000      0      0   78.8500         1         0         0           1
14  80.0000      0      0   30.0000         1         0         0           0
15      NaN      0      0   25.9250         1         0         0           0

    sex_male  embarked_C  embarked_Q  embarked_S
0          0           0           0           1
1          1           0           0           1
2          0           0           0           1
3          1           0           0           1
4          0           0           0           1
5          1           0           0           1
6          0           0           0           1
7          1           0           0           1
8          0           0           0           1
9          1           1           0           0
10         1           1           0           0
11         0           1           0           0
12         0           1           0           0
13         0           0           0           1
14         1           0           0           1
15         1           0           0           1
data = features_dummies.values
import numpy as np
np.isnan(data).any()
True
With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. To establish a baseline, we set up the simplest possible model, DummyClassifier, and see what score it achieves.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer

train_data, test_data, train_labels, test_labels = train_test_split(
    data, labels, random_state=0)

imp = Imputer()
imp.fit(train_data)
train_data_finite = imp.transform(train_data)
test_data_finite = imp.transform(test_data)
np.isnan(train_data_finite).any()
False
from sklearn.dummy import DummyClassifier

clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data_finite, train_labels)
print("Prediction accuracy: %f" % clf.score(test_data_finite, test_labels))
EXERCISE: Try executing the above classification using LogisticRegression and RandomForestClassifier instead of DummyClassifier. Does selecting a different subset of features help?
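One possible starting point for the exercise, as a sketch only: it assumes the imputed train/test arrays from above are still in scope, and the hyperparameters are arbitrary.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# logistic regression on the same imputed data
lr = LogisticRegression()
lr.fit(train_data_finite, train_labels)
print("LogisticRegression accuracy: %f" % lr.score(test_data_finite, test_labels))

# random forest; n_estimators chosen arbitrarily
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(train_data_finite, train_labels)
print("RandomForestClassifier accuracy: %f" % rf.score(test_data_finite, test_labels))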
2 Misc tools
2.1 scikit-learn
2.1.1 ML models so far
- from sklearn.datasets import make_blobs
- from sklearn.datasets import load_iris
- from sklearn.model_selection import train_test_split
- from sklearn.linear_model import LogisticRegression
- from sklearn.linear_model import LinearRegression
- from sklearn.neighbors import KNeighborsClassifier
- from sklearn.neighbors import KNeighborsRegressor
- from sklearn.preprocessing import StandardScaler
- from sklearn.decomposition import PCA
- from sklearn.metrics import confusion_matrix, accuracy_score
- from sklearn.metrics import adjusted_rand_score
- from sklearn.cluster import KMeans
- from sklearn.cluster import MeanShift
- from sklearn.cluster import DBSCAN # <<< this algorithm has related sources in LIHONGYI's lecture-12
- from sklearn.cluster import AffinityPropagation
- from sklearn.cluster import SpectralClustering
- from sklearn.cluster import Ward
- from sklearn.feature_extraction import DictVectorizer
- from sklearn.preprocessing import Imputer
- from sklearn.dummy import DummyClassifier
2.1.2 ML functions in this note
- DictVectorizer and vec.get_feature_names()
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec.fit_transform(measurements).toarray()
np.isnan(data).any()
print(type(titanic))                 #<- DataFrame
print(type(titanic.survived))        #<- Series
print(type(titanic.survived.values)) #<- ndarray
2.1.3 DictVectorizer
from sklearn.feature_extraction import DictVectorizer

measurements = [
    {'city': 'Dubai', 'temperature': 33.},
    {'city': 'London', 'temperature': 12.},
    {'city': 'San Francisco', 'temperature': 18.},
]  # an array of dicts

dv = DictVectorizer()
vec = dv.fit_transform(measurements).toarray()
vec
array([[ 1., 0., 0., 33.], [ 0., 1., 0., 12.], [ 0., 0., 1., 18.]])
string value ('city'):          numerical value ('temperature'):
do one-hot encoding             do nothing
----------------------------    --------------------------------
{'city': 'Dubai',               'temperature': 33.},
{'city': 'London',              'temperature': 12.},
{'city': 'San Francisco',       'temperature': 18.},

one-hot encoding for 'city'     no encoding for 'temperature'
array([[ 1.,  0.,  0.,          33.],
       [ 0.,  1.,  0.,          12.],
       [ 0.,  0.,  1.,          18.]])
2.1.4 Imputer
http://scikit-learn.org/stable/modules/preprocessing.html#imputation-of-missing-values
Imputer(missing_values='NaN',  #<- what counts as a missing value in the original ndarray
        strategy='mean',       #<- what value the missing_values are replaced with
        axis=0,                #<- 0: apply strategy per column; 1: apply strategy per row
        verbose=0,
        copy=True)             #<- False: change the original ndarray in place;
                               #   True: do the change on a copy
Note that the replacement values for placeholders like 'NaN' are computed during the 'fit' step, NOT during the 'transform' step.
from sklearn.preprocessing import Imputer

imp = Imputer(missing_values='NaN', strategy='mean', axis=0)

# fit computes the replacement value for 'nan' in each column:
#<- for col_1, nan = (1 + 5) / 2 = 3
#<- for col_2, nan = (2 + 3 + 6) / 3 = 3.66666667
imp.fit(np.array([[1, 2], [np.nan, 3], [5, 6]]))

#< error: transform expects the same number of columns as the array passed to fit
# after_imp = imp.transform(np.array([[np.nan, np.nan, 3], [4, 5, np.nan], [7, 8, 9]]))

# replace the 'nan' of each column with the fitted mean:
#<- for col_1, nan = 3
#<- for col_2, nan = 3.66666667
after_imp = imp.transform(np.array([[np.nan, np.nan], [4, np.nan], [7, 8]]))
after_imp
array([[ 3. , 3.66666667], [ 4. , 3.66666667], [ 7. , 8. ]])
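Note that Imputer was deprecated in scikit-learn 0.20 and later removed; the replacement is sklearn.impute.SimpleImputer, which always imputes column-wise. A minimal sketch of the same example with SimpleImputer:
import numpy as np
from sklearn.impute import SimpleImputer

# same fit/transform pattern as above, but missing values are given as np.nan
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(np.array([[1, 2], [np.nan, 3], [5, 6]]))
imp.transform(np.array([[np.nan, np.nan], [4, np.nan], [7, 8]]))
# array([[3.        , 3.66666667],
#        [4.        , 3.66666667],
#        [7.        , 8.        ]])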
2.2 pandas
2.2.1 pd.get_dummies
# declaration of ~pandas.get_dummies()~
pandas.get_dummies(data,
                   prefix=None,
                   prefix_sep='_',
                   dummy_na=False,
                   columns=None,
                   sparse=False,
                   drop_first=False,
                   dtype=None)
Convert categorical variable into dummy/indicator variables
2.2.1.1 from string to one-hot encoding
import pandas as pd
s = pd.Series(list('abcdaa'))
gd = pd.get_dummies(s)
gd
   a  b  c  d
0  1  0  0  0
1  0  1  0  0
2  0  0  1  0
3  0  0  0  1
4  1  0  0  0
5  1  0  0  0
2.2.1.2 from dict to one-hot encoding
df = pd.DataFrame({'A': ['a', 'b', 'c'],
                   'B': ['b', 'a', 'c'],
                   'C': [1, 2, 3]})
df
   A  B  C
0  a  b  1
1  b  a  2
2  c  c  3
gd = pd.get_dummies(df)
gd
   C  A_a  A_b  A_c  B_a  B_b  B_c
0  1    1    0    0    0    1    0
1  2    0    1    0    1    0    0
2  3    0    0    1    0    0    1
gd = pd.get_dummies(df['A'], drop_first=True)
gd
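The output is not shown in the original notes; for reference, dropping the first level ('a') should leave only the 'b' and 'c' indicator columns:
   b  c
0  0  0
1  1  0
2  0  1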
2.2.1.3 from boolean-like column to one-hot encoding
import pandas as pd
df = pd.DataFrame({'A': ['good', 'bad', 'bad', 'bad', 'good']})
col = pd.get_dummies(df['A'])
col
   bad  good
0    0     1
1    1     0
2    1     0
3    1     0
4    0     1
Here we need to drop the first dummy column to get a single 0/1 representation:
- 1: good
- 0: bad
df = pd.DataFrame({'A': ['good', 'bad', 'bad', 'bad', 'good']})
col = pd.get_dummies(df['A'], drop_first=True)
df['A_dum'] = col
df
      A  A_dum
0  good      1
1   bad      0
2   bad      0
3   bad      0
4  good      1
2.3 numpy
2.3.1 np.isnan()
# declaration of numpy.isnan
numpy.isnan(x,                   # array_like, input array
            /,
            out=None,
            *,
            where=True,          # array_like, optional; values of True indicate to
                                 # calculate the ufunc at that position, values of False
                                 # indicate to leave the value in the output alone
            casting='same_kind',
            order='K',
            dtype=None,
            subok=True[, signature, extobj])
Test element-wise for NaN and return result as a boolean array.
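A minimal usage sketch (the input array is made up):
import numpy as np

np.isnan(np.array([1.0, np.nan, 3.0]))
# array([False,  True, False])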
2.3.2 np.any()
np.any(a) can be read as np.is_there_any_True_along_the_given_axis().
Test whether any array element along a given axis evaluates to True. Returns a single boolean unless axis is not None.
numpy.any(a,
          axis=None,   # None: over all elements;
                       # 0: along columns;
                       # 1: along rows
          out=None,
          keepdims=<class 'numpy._globals._NoValue'>)
boolarr = np.arange(0, 9, 1).reshape((3, 3))
print(boolarr)

has_true_all = np.any(boolarr > 5)
print(has_true_all)

has_true_row = np.any(boolarr > 5, axis=0)
print(has_true_row)

has_true_col = np.any(boolarr > 5, axis=1)
print(has_true_col)
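The output is not shown in the original notes; the expected result is below (note that axis=0 aggregates over rows, giving one value per column, while axis=1 gives one value per row):
[[0 1 2]
 [3 4 5]
 [6 7 8]]
True
[ True  True  True]
[False False  True]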