Class methods defined here:
- accuracy(y_true, y_pred)
- Calculates the accuracy score between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
accuracy: (float) - The accuracy score.
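As a rough illustration, accuracy can be sketched in plain NumPy (the sample arrays are made up; this is not the class's actual implementation):
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# Accuracy is the fraction of predictions that match the true labels.
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # 0.8
```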
- classification_report(y_true, y_pred)
- Generates a classification report for the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
report: (dict) - The classification report.
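A sketch of how such a per-class report can be assembled; the exact keys and layout of the returned dict are an assumption, not confirmed by this excerpt:
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 2])
y_pred = np.array([0, 1, 0, 0, 1, 2])

report = {}
for cls in np.unique(y_true):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    # Assumed dict layout: one entry per class with the usual four fields.
    report[int(cls)] = {"precision": p, "recall": r, "f1-score": f1,
                        "support": int(np.sum(y_true == cls))}
```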
- confusion_matrix(y_true, y_pred)
- Calculates the confusion matrix between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
cm: (np.ndarray) - The confusion matrix.
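One common construction, sketched below; the orientation (rows = true class, columns = predicted class) is an assumption:
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 2, 2])
y_pred = np.array([0, 1, 0, 0, 2, 1])

labels = np.unique(np.concatenate([y_true, y_pred]))
cm = np.zeros((labels.size, labels.size), dtype=int)
for t, p in zip(y_true, y_pred):
    # Rows index the true class, columns the predicted class.
    cm[np.searchsorted(labels, t), np.searchsorted(labels, p)] += 1
print(cm)
```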
- f1_score(y_true, y_pred)
- Calculates the F1 score between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
f1_score: (float) - The F1 score.
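The standard binary F1, sketched below; treating 1 as the positive class is an assumption (the class method may also handle multiclass averaging):
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.8
```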
- log_loss(y_true, y_pred)
- Calculates the log loss between the true labels and the predicted probabilities.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted probabilities.
Returns:
log_loss: (float) - The log loss.
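A sketch of binary log loss; the clipping epsilon is an assumption, used here only to guard against log(0):
```python
import numpy as np

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.6, 0.3])  # predicted probability of the positive class

eps = 1e-15  # assumed clipping constant to avoid log(0)
p = np.clip(y_prob, eps, 1 - eps)
log_loss = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
```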
- mean_absolute_error(y_true, y_pred)
- Calculates the mean absolute error between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
mae: (float) - The mean absolute error.
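A minimal sketch of the same computation, with made-up sample values:
```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# Mean of the absolute differences.
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # 0.5
```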
- mean_absolute_percentage_error(y_true, y_pred)
- Calculates the mean absolute percentage error between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
mape: (float) - The mean absolute percentage error as a decimal. Returns np.nan if y_true is all zeros.
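A sketch consistent with the documented behaviour (a decimal result, np.nan when y_true is all zeros); masking individual zero targets is an assumption:
```python
import numpy as np

y_true = np.array([100.0, 200.0, 50.0])
y_pred = np.array([110.0, 180.0, 50.0])

nonzero = y_true != 0
if not nonzero.any():
    mape = np.nan  # documented: all-zero y_true yields np.nan
else:
    mape = np.mean(np.abs((y_true[nonzero] - y_pred[nonzero]) / y_true[nonzero]))
print(mape)  # ~0.0667, i.e. about 6.7%
```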
- mean_percentage_error(y_true, y_pred)
- Calculates the mean percentage error between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
mpe: (float) - The mean percentage error.
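A sketch of the signed variant; the sign convention ((y_true - y_pred) / y_true) is an assumption. Unlike MAPE, over- and under-predictions can cancel:
```python
import numpy as np

y_true = np.array([100.0, 200.0, 50.0])
y_pred = np.array([110.0, 180.0, 50.0])

# Signed percentage errors; opposite-sign errors cancel out.
mpe = np.mean((y_true - y_pred) / y_true)
print(mpe)  # 0.0 here, despite nonzero individual errors
```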
- mean_squared_error(y_true, y_pred)
- Calculates the mean squared error between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
mse: (float) - The mean squared error.
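The same computation sketched in plain NumPy:
```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# Mean of the squared differences.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.375
```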
- precision(y_true, y_pred)
- Calculates the precision score between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
precision: (float) - The precision score.
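A binary sketch, again assuming 1 is the positive class:
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
# Of everything predicted positive, the fraction that actually is positive.
precision = tp / (tp + fp)
print(precision)  # 1.0
```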
- r_squared(y_true, y_pred)
- Calculates the R-squared score between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
r_squared: (float) - The R-squared score.
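A sketch of the usual coefficient-of-determination formula, 1 - SS_res / SS_tot:
```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # ~0.949
```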
- recall(y_true, y_pred)
- Calculates the recall score between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
recall: (float) - The recall score.
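A binary sketch, assuming 1 is the positive class:
```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
# Of all actual positives, the fraction that was found.
recall = tp / (tp + fn)
print(recall)  # ~0.667
```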
- root_mean_squared_error(y_true, y_pred)
- Calculates the root mean squared error between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
rmse: (float) - The root mean squared error.
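A minimal sketch; RMSE is simply the square root of the MSE, so it is in the same units as the targets:
```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(round(rmse, 3))  # ~0.612
```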
- show_classification_report(y_true, y_pred)
- Generates and displays a classification report for the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
report: (dict) - The classification report.
- show_confusion_matrix(y_true, y_pred)
- Calculates and displays the confusion matrix between the true and predicted values.
Args:
y_true: (np.ndarray) - The true values.
y_pred: (np.ndarray) - The predicted values.
Returns:
cm: (np.ndarray) - The confusion matrix.
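The show_ variants presumably print a formatted view before returning the same values as their non-show counterparts. One plausible text rendering of the report dict is sketched below; the layout and the sample numbers are assumptions:
```python
# Hypothetical report dict, shaped like the classification_report sketch above.
report = {0: {"precision": 1.0, "recall": 0.5, "f1-score": 0.667, "support": 2},
          1: {"precision": 0.5, "recall": 1.0, "f1-score": 0.667, "support": 1}}

print(f"{'class':>5} {'precision':>9} {'recall':>6} {'f1-score':>8} {'support':>7}")
for cls, row in report.items():
    print(f"{cls:>5} {row['precision']:>9.2f} {row['recall']:>6.2f} "
          f"{row['f1-score']:>8.2f} {row['support']:>7}")
```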