From 9fbeeea46e1d00547090d5bd7236fba8854b0869 Mon Sep 17 00:00:00 2001
From: Viraj Navkal
Date: Tue, 28 Nov 2017 13:32:35 -0800
Subject: [PATCH] Small fixes to notation in documentation (#2903)

* make every theta lowercase

* use uniform font and capitalization for function name
---
 doc/model.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/model.md b/doc/model.md
index 2bbb1b541..58f242800 100644
--- a/doc/model.md
+++ b/doc/model.md
@@ -31,7 +31,7 @@ to measure the performance of the model given a certain set of parameters.
 A very important fact about objective functions is they ***must always*** contain two parts: training loss and regularization.
 
 ```math
-Obj(\Theta) = L(\theta) + \Omega(\Theta)
+\text{obj}(\theta) = L(\theta) + \Omega(\theta)
 ```
 
 where ``$ L $`` is the training loss function, and ``$ \Omega $`` is the regularization term. The training loss measures how *predictive* our model is on training data.
@@ -188,7 +188,7 @@ By defining it formally, we can get a better idea of what we are learning, and y
 Here is the magical part of the derivation. After reformalizing the tree model, we can write the objective value with the ``$ t$``-th tree as:
 
 ```math
-Obj^{(t)} &\approx \sum_{i=1}^n [g_i w_{q(x_i)} + \frac{1}{2} h_i w_{q(x_i)}^2] + \gamma T + \frac{1}{2}\lambda \sum_{j=1}^T w_j^2\\
+\text{obj}^{(t)} &\approx \sum_{i=1}^n [g_i w_{q(x_i)} + \frac{1}{2} h_i w_{q(x_i)}^2] + \gamma T + \frac{1}{2}\lambda \sum_{j=1}^T w_j^2\\
 &= \sum^T_{j=1} [(\sum_{i\in I_j} g_i) w_j + \frac{1}{2} (\sum_{i\in I_j} h_i + \lambda) w_j^2 ] + \gamma T
 ```
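A note on the second hunk, since it is the one that carries the derivation: grouping the sum by leaf is what makes the objective tractable. Writing ``$ G_j = \sum_{i\in I_j} g_i $`` and ``$ H_j = \sum_{i\in I_j} h_i $`` (shorthand assumed here, not quoted from the patch), each leaf contributes an independent quadratic in ``$ w_j $``, so the best leaf weights and the resulting objective value follow in closed form. A minimal sketch of that standard continuation:

```math
\text{obj}^{(t)} \approx \sum^T_{j=1} [G_j w_j + \frac{1}{2} (H_j + \lambda) w_j^2] + \gamma T,\qquad
w_j^\ast = -\frac{G_j}{H_j + \lambda},\qquad
\text{obj}^\ast = -\frac{1}{2} \sum_{j=1}^T \frac{G_j^2}{H_j + \lambda} + \gamma T
```

The closed-form ``$ \text{obj}^\ast $`` is what ends up scoring tree structures and candidate splits, which is why keeping the ``$ \text{obj} $`` name and the ``$ \theta $`` symbols consistent across both formulas is worth the small edit.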