{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 5-5 Loss Functions: losses\n",
    "\n",
    "In general, the objective function of supervised learning consists of a loss function and regularization terms (Objective = Loss + Regularization).\n",
    "\n",
    "For a Keras model, the regularization terms in the objective are usually specified in the individual layers. For example, the kernel_regularizer and bias_regularizer parameters of Dense apply an L1 or L2 penalty to the weights. In addition, the kernel_constraint and bias_constraint parameters constrain the range of the weight values, which is also a form of regularization.\n",
    "\n",
    "The loss function is specified when the model is compiled. For regression models, the commonly used loss is the mean squared error mean_squared_error.\n",
    "\n",
    "For binary classification models, the commonly used loss is the binary cross-entropy binary_crossentropy.\n",
    "\n",
    "For multi-class models, if the labels are one-hot encoded, use the categorical cross-entropy loss categorical_crossentropy; if the labels are encoded as class indices, use the sparse categorical cross-entropy loss sparse_categorical_crossentropy.\n",
    "\n",
    "If needed, you can also define a custom loss function. A custom loss function takes two tensors y_true and y_pred as input and returns a scalar as the loss value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import tensorflow as tf\n",
    "from tensorflow.keras import layers, models, losses, regularizers, constraints"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Loss Functions and Regularization Terms"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "dense (Dense)                (None, 64)                4160      \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (None, 10)                650       \n",
      "=================================================================\n",
      "Total params: 4,810\n",
      "Trainable params: 4,810\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "tf.keras.backend.clear_session()\n",
    "\n",
    "model = models.Sequential()\n",
    "model.add(layers.Dense(64, input_dim=64,\n",
    "                kernel_regularizer=regularizers.l2(0.01),\n",
    "                activity_regularizer=regularizers.l1(0.01),\n",
    "                kernel_constraint=constraints.MaxNorm(max_value=2, axis=0)))\n",
    "model.add(layers.Dense(10,\n",
    "                kernel_regularizer=regularizers.l1_l2(0.01, 0.01), activation=\"sigmoid\"))\n",
    "model.compile(optimizer=\"rmsprop\",\n",
    "              loss=\"sparse_categorical_crossentropy\", metrics=[\"AUC\"])\n",
    "model.summary()"
   ]
  },
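  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The regularization terms specified above are collected by Keras and added to the training loss automatically. A minimal sketch of how to inspect them, assuming the model defined in the previous cell: after a forward pass, model.losses holds the current regularization loss tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run a forward pass on dummy data so the activity regularizer has activations to penalize.\n",
    "x_demo = tf.random.normal((4, 64))  # hypothetical batch; shape matches input_dim=64 above\n",
    "_ = model(x_demo)\n",
    "\n",
    "# model.losses lists the kernel/activity regularization terms; Keras adds their sum to the loss.\n",
    "print(model.losses)"
   ]
  },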
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Built-in Loss Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The built-in loss functions generally come in two forms: a class implementation and a function implementation.\n",
    "\n",
    "For example, CategoricalCrossentropy and categorical_crossentropy are both the categorical cross-entropy loss; the former is the class form and the latter is the function form. (A short demonstration of the two forms follows the list below.)\n",
    "\n",
    "Some commonly used built-in loss functions are described below.\n",
    "\n",
    "* mean_squared_error (mean squared error, for regression; shorthand aliases mse and MSE, class form MeanSquaredError)\n",
    "\n",
    "* mean_absolute_error (mean absolute error, for regression; shorthand aliases mae and MAE, class form MeanAbsoluteError)\n",
    "\n",
    "* mean_absolute_percentage_error (mean absolute percentage error, for regression; shorthand aliases mape and MAPE, class form MeanAbsolutePercentageError)\n",
    "\n",
    "* Huber (Huber loss, class form only, for regression; interpolates between mse and mae and is robust to outliers, so it has certain advantages over mse)\n",
    "\n",
    "* binary_crossentropy (binary cross-entropy, for binary classification; class form BinaryCrossentropy)\n",
    "\n",
    "* categorical_crossentropy (categorical cross-entropy, for multi-class classification; requires one-hot encoded labels, class form CategoricalCrossentropy)\n",
    "\n",
    "* sparse_categorical_crossentropy (sparse categorical cross-entropy, for multi-class classification; requires labels encoded as class indices, class form SparseCategoricalCrossentropy)\n",
    "\n",
    "* hinge (hinge loss, for binary classification; best known as the loss function of the support vector machine (SVM), class form Hinge)\n",
    "\n",
    "* kld (Kullback-Leibler divergence, also called relative entropy; an information-theoretic measure of the difference between two probability distributions, often used as the loss in the expectation-maximization (EM) algorithm; shorthand alias KLD, class form KLDivergence)\n",
    "\n",
    "* cosine_similarity (cosine similarity, usable for multi-class classification; class form CosineSimilarity)"
   ]
  },
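  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the two forms on made-up tensors: the function form returns one loss value per sample, while the class form is a callable object that by default reduces the result to a scalar."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy one-hot labels and predicted probabilities (values chosen only for illustration).\n",
    "y_true = tf.constant([[0., 1.], [1., 0.]])\n",
    "y_pred = tf.constant([[0.3, 0.7], [0.8, 0.2]])\n",
    "\n",
    "# Function form: one loss value per sample.\n",
    "print(losses.categorical_crossentropy(y_true, y_pred))\n",
    "\n",
    "# Class form: instantiate once, then call; reduces to a scalar mean by default.\n",
    "print(losses.CategoricalCrossentropy()(y_true, y_pred))"
   ]
  },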
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Custom Loss Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A custom loss function takes two tensors y_true and y_pred as input and returns a scalar as the loss value.\n",
    "\n",
    "Alternatively, you can subclass tf.keras.losses.Loss and override the call method to implement the loss computation, which gives the class form of the loss.\n",
    "\n",
    "Below is a demonstration of a custom implementation of Focal Loss. Focal Loss is an improved variant of binary_crossentropy.\n",
    "\n",
    "When classes are imbalanced or some samples are hard to train on, it tends to perform better than binary cross-entropy.\n",
    "\n",
    "For details, see the Zhihu discussion \"How to evaluate Kaiming's Focal Loss for Dense Object Detection?\":\n",
    "\n",
    "https://www.zhihu.com/question/63581984"
   ]
  },
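  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the binary focal loss implemented below can be written as\n",
    "\n",
    "$$FL(y, p) = \\begin{cases} -\\alpha (1-p)^{\\gamma} \\log(p), & y = 1 \\\\\\\\ -(1-\\alpha)\\, p^{\\gamma} \\log(1-p), & y = 0 \\end{cases}$$\n",
    "\n",
    "where $p$ is the predicted probability of the positive class, $\\gamma$ down-weights easy samples, and $\\alpha$ balances the two classes. With $\\gamma = 0$ and $\\alpha = 0.5$ it reduces, up to a constant factor, to binary cross-entropy."
   ]
  },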
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def focal_loss(gamma=2., alpha=0.25):\n",
    "\n",
    "    def focal_loss_fixed(y_true, y_pred):\n",
    "        # pt_1: predicted probability where the label is 1; pt_0: where the label is 0.\n",
    "        pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))\n",
    "        pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))\n",
    "        # 1e-07 guards the log against zero probabilities.\n",
    "        loss = -tf.reduce_sum(alpha * tf.pow(1. - pt_1, gamma) * tf.math.log(1e-07 + pt_1)) \\\n",
    "               -tf.reduce_sum((1 - alpha) * tf.pow(pt_0, gamma) * tf.math.log(1. - pt_0 + 1e-07))\n",
    "        return loss\n",
    "    return focal_loss_fixed"
   ]
  },
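  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the function form on made-up tensors, and a sketch of how the returned closure would be passed to compile (the model name is only a placeholder):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy binary labels and predicted probabilities, chosen only for illustration.\n",
    "y_true = tf.constant([[1.], [0.], [1.]])\n",
    "y_pred = tf.constant([[0.9], [0.1], [0.3]])\n",
    "print(focal_loss()(y_true, y_pred))\n",
    "\n",
    "# To train with it, pass the returned closure as the loss at compile time, e.g.:\n",
    "# model.compile(optimizer=\"adam\", loss=focal_loss(gamma=2., alpha=0.25))"
   ]
  },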
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FocalLoss(losses.Loss):\n",
    "\n",
    "    def __init__(self, gamma=2.0, alpha=0.25, **kwargs):\n",
    "        # The base-class constructor sets up the name and reduction attributes\n",
    "        # that Loss.__call__ relies on.\n",
    "        super().__init__(**kwargs)\n",
    "        self.gamma = gamma\n",
    "        self.alpha = alpha\n",
    "\n",
    "    def call(self, y_true, y_pred):\n",
    "        pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))\n",
    "        pt_0 = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))\n",
    "        loss = -tf.reduce_sum(self.alpha * tf.pow(1. - pt_1, self.gamma) * tf.math.log(1e-07 + pt_1)) \\\n",
    "               -tf.reduce_sum((1 - self.alpha) * tf.pow(pt_0, self.gamma) * tf.math.log(1. - pt_0 + 1e-07))\n",
    "        return loss"
   ]
  },
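  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The class form is used the same way as the built-in loss classes: instantiate it, then either call it directly or pass the instance to compile (the compile line is shown only as a sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy tensors, chosen only for illustration.\n",
    "focal = FocalLoss(gamma=2.0, alpha=0.25)\n",
    "print(focal(tf.constant([[1.], [0.]]), tf.constant([[0.8], [0.2]])))\n",
    "\n",
    "# e.g. model.compile(optimizer=\"adam\", loss=FocalLoss(gamma=2.0, alpha=0.25))"
   ]
  },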
  {
   "cell_type": "code",
   "execution_count": null,