In this paper, we study the error analysis of indefinite kernel networks with coefficient regularization for non-i.i.d. sampling. The framework under investigation differs from classical kernel learning: the kernel function is required to satisfy only continuity and uniform boundedness, and the standard boundedness assumption on the output data is abandoned in favor of a generalized moment hypothesis on the output sample values. Satisfactory capacity-independent error bounds and learning rates are derived for this learning algorithm by integral operator techniques.
Yanfang Tao, Biqin Song, Luoqing Li
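For orientation, the coefficient-regularized least squares scheme commonly studied in this line of work can be sketched as follows; the notation (sample $\mathbf{z}=\{(x_i,y_i)\}_{i=1}^m$, kernel $K$, regularization parameter $\lambda$) is assumed here, not taken from the abstract itself:

```latex
f_{\mathbf{z}}(x) = \sum_{i=1}^{m} \alpha_i^{\mathbf{z}}\, K(x, x_i),
\qquad
\boldsymbol{\alpha}^{\mathbf{z}}
= \arg\min_{\boldsymbol{\alpha}\in\mathbb{R}^m}
\left\{
\frac{1}{m}\sum_{i=1}^{m}
\Bigl(\sum_{j=1}^{m}\alpha_j K(x_i, x_j) - y_i\Bigr)^{2}
+ \lambda\, m \sum_{j=1}^{m} \alpha_j^{2}
\right\}.
```

Because the penalty acts on the coefficient vector $\boldsymbol{\alpha}$ rather than on an RKHS norm, $K$ need not be positive semi-definite, which is why only continuity and uniform boundedness of the kernel are assumed.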