I first learned the orthogonality principle in the real-valued case, and at the time real numbers seemed good enough: why work through the more involved complex-valued derivation? Going deeper, I found that the complex form is actually the more general one. I am posting my derivation notes here for my own review, and in the hope that they help others studying the same material.
Consider a filter of length M. For an input sequence u(n), n = 0, 1, 2, ..., the filter output y(n) is
\[\begin{array}{*{20}{c}}
{y(n) = \sum\limits_{k = 0}^{M - 1} {w_k^*u(n - k)} }&{n = 0,1,2...}
\end{array}\]
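As a quick sanity check, the output equation can be sketched in NumPy. The direct double loop mirrors the sum term by term; the function name `fir_output` and the convention that samples before n = 0 are zero are illustrative choices, not from the original post:

```python
import numpy as np

def fir_output(w, u):
    """y(n) = sum_{k=0}^{M-1} conj(w[k]) * u(n - k); samples before n = 0 taken as zero."""
    M, N = len(w), len(u)
    y = np.zeros(N, dtype=complex)
    for n in range(N):
        for k in range(M):
            if n - k >= 0:
                y[n] += np.conj(w[k]) * u[n - k]
    return y

# The same operation written as a convolution with the conjugated taps:
w = np.array([1 + 1j, 2 - 1j])
u = np.array([1j, 2.0, 3 - 1j])
print(np.allclose(fir_output(w, u), np.convolve(np.conj(w), u)[:len(u)]))  # True
```

Note the conjugate on the taps: with the inner-product convention used here, the filter applies w_k^*, not w_k.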
Here y(n) is an estimate of the desired response d(n), so the estimation error e(n) is
\[e(n) = d(n) - y(n)\]
To optimize the filter design, we take the mean-square error as the cost function:
\[J = E[e(n){e^*}(n)] = E[|e(n){|^2}]\]
For complex-valued input the filter coefficients are also complex; write the k-th coefficient in terms of its real and imaginary parts:
\[\begin{array}{*{20}{c}}
{{w_k} = {a_k} + j{b_k}}&{k = 0,1,\ldots,M - 1}
\end{array}\]
The k-th element of the gradient of the cost function J with respect to the coefficient vector w is then
\[\begin{array}{*{20}{c}}
{{\nabla _k}J = \frac{{\partial J}}{{\partial {a_k}}} + j\frac{{\partial J}}{{\partial {b_k}}}}&{k = 0,1,\ldots,M - 1}
\end{array}\]
Expanding gives:
\[\begin{array}{l}
{\nabla _k}J = \frac{{\partial J}}{{\partial {a_k}}} + j\frac{{\partial J}}{{\partial {b_k}}} = E\left[ {\frac{{\partial e(n)}}{{\partial {a_k}}}{e^*}(n) + \frac{{\partial e(n)}}{{\partial {b_k}}}j{e^*}(n) + \frac{{\partial {e^*}(n)}}{{\partial {a_k}}}e(n) + \frac{{\partial {e^*}(n)}}{{\partial {b_k}}}je(n)} \right]\\
\frac{{\partial e(n)}}{{\partial {a_k}}} = \frac{{\partial \left[ {d(n) - \sum\limits_{i = 0}^{M - 1} {w_i^*u(n - i)} } \right]}}{{\partial {a_k}}} = \frac{{\partial \left[ {d(n) - \sum\limits_{i = 0}^{M - 1} {({a_i} - j{b_i})u(n - i)} } \right]}}{{\partial {a_k}}} =  - u(n - k)\\
\frac{{\partial e(n)}}{{\partial {b_k}}} = \frac{{\partial \left[ {d(n) - \sum\limits_{i = 0}^{M - 1} {w_i^*u(n - i)} } \right]}}{{\partial {b_k}}} = \frac{{\partial \left[ {d(n) - \sum\limits_{i = 0}^{M - 1} {({a_i} - j{b_i})u(n - i)} } \right]}}{{\partial {b_k}}} = ju(n - k)\\
\frac{{\partial {e^*}(n)}}{{\partial {a_k}}} = \frac{{\partial \left[ {{d^*}(n) - \sum\limits_{i = 0}^{M - 1} {{w_i}{u^*}(n - i)} } \right]}}{{\partial {a_k}}} = \frac{{\partial \left[ {{d^*}(n) - \sum\limits_{i = 0}^{M - 1} {({a_i} + j{b_i}){u^*}(n - i)} } \right]}}{{\partial {a_k}}} =  - {u^*}(n - k)\\
\frac{{\partial {e^*}(n)}}{{\partial {b_k}}} = \frac{{\partial \left[ {{d^*}(n) - \sum\limits_{i = 0}^{M - 1} {{w_i}{u^*}(n - i)} } \right]}}{{\partial {b_k}}} = \frac{{\partial \left[ {{d^*}(n) - \sum\limits_{i = 0}^{M - 1} {({a_i} + j{b_i}){u^*}(n - i)} } \right]}}{{\partial {b_k}}} =  - j{u^*}(n - k)
\end{array}\]
Substituting these four partial derivatives into the k-th gradient element and simplifying:
\[\begin{array}{l}
{\nabla _k}J = E\left[ { - u(n - k){e^*}(n) + ju(n - k)j{e^*}(n) - {u^*}(n - k)e(n) - j{u^*}(n - k)je(n)} \right]\\
 = E\left[ { - u(n - k){e^*}(n) - u(n - k){e^*}(n) - {u^*}(n - k)e(n) + {u^*}(n - k)e(n)} \right]\\
 =  - 2E\left[ {u(n - k){e^*}(n)} \right]
\end{array}\]
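This gradient expression can be checked numerically (a sketch with made-up random data; the sample mean over a finite record stands in for the expectation E[·]): the analytic element -2E[u(n-k)e*(n)] should agree with a central finite difference of J over a_k and b_k.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 400
u = rng.normal(size=N) + 1j * rng.normal(size=N)
d = rng.normal(size=N) + 1j * rng.normal(size=N)
w = rng.normal(size=M) + 1j * rng.normal(size=M)

# Data matrix with U[n, k] = u(n - k) (zeros before n = 0), so e = d - U @ conj(w)
U = np.zeros((N, M), dtype=complex)
for k in range(M):
    U[k:, k] = u[:N - k]

def J(w):
    """Sample-mean estimate of the cost E[|e(n)|^2]."""
    e = d - U @ np.conj(w)
    return np.mean(np.abs(e) ** 2)

# Analytic gradient element: grad_k J = -2 E[u(n-k) e*(n)]
e = d - U @ np.conj(w)
grad_analytic = -2 * np.mean(U * np.conj(e)[:, None], axis=0)

# Finite-difference dJ/da_k + j dJ/db_k, tap by tap
h = 1e-6
grad_fd = np.zeros(M, dtype=complex)
for k in range(M):
    step = np.zeros(M, dtype=complex)
    step[k] = h
    grad_fd[k] = (J(w + step) - J(w - step)) / (2 * h) \
        + 1j * (J(w + 1j * step) - J(w - 1j * step)) / (2 * h)

print(np.max(np.abs(grad_analytic - grad_fd)))  # tiny: J is quadratic in (a_k, b_k)
```

Because J is quadratic in the real parameters, the central difference is exact up to floating-point rounding, so the two gradients match to high precision.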
Setting the gradient to zero yields the condition under which the filter's mean-square error is minimal (optimal):
\[\begin{array}{*{20}{c}}
{E\left[ {u(n - k){e^*}(n)} \right] = 0}&{k = 0,1,\ldots,M - 1}
\end{array}\]
This is the mathematical statement of the orthogonality principle: the filter operates at its mean-square-error optimum precisely when the estimation error is orthogonal to the input data.
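The principle can also be observed directly (a sketch with synthetic data; the least-squares fit plays the role of the optimal filter, sample means replace expectations, and `w_true` plus the noise level are made-up choices): at the solution of the normal equations, the sample correlation between each delayed input and the error vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 2000
u = rng.normal(size=N) + 1j * rng.normal(size=N)

# Data matrix with U[n, k] = u(n - k) (zeros before n = 0)
U = np.zeros((N, M), dtype=complex)
for k in range(M):
    U[k:, k] = u[:N - k]

# Hypothetical desired response: a fixed filtering of u plus observation noise
w_true = np.array([0.5 - 0.2j, 1.0 + 0.3j, -0.4j])
d = U @ np.conj(w_true) + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sample-optimal coefficients: least squares minimizes mean |d - U x|^2 over x = conj(w)
x, *_ = np.linalg.lstsq(U, d, rcond=None)
e = d - U @ x

# Orthogonality: mean of u(n-k) e*(n) is ~0 for every tap k at the optimum
corr = np.mean(U * np.conj(e)[:, None], axis=0)
print(np.max(np.abs(corr)))  # ~machine precision
```

This is just the normal equations in disguise: the least-squares solution satisfies U^H e = 0, which is exactly the sample version of E[u(n-k)e*(n)] = 0 for every tap.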
Original post: http://www.cnblogs.com/icoolmedia/p/orthogonality_principle.html