  1. Backpropagation - Wikipedia

    Backpropagation computes the gradient for a fixed input–output pair, where the weights can vary. Each individual component of the gradient can be computed by the chain rule; but doing this separately …
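
A minimal sketch of the idea in this result (my own toy example, not taken from the article): for a two-layer scalar "network" with weights w1 and w2, each component of the gradient of the loss is obtained by applying the chain rule backward from the output. The network shape, activation, and loss here are assumptions for illustration.

```python
import math

def forward(x, w1, w2):
    # hidden activation and output for a tiny 2-layer scalar network
    h = math.tanh(w1 * x)
    y = w2 * h
    return h, y

def backprop(x, target, w1, w2):
    h, y = forward(x, w1, w2)
    loss = 0.5 * (y - target) ** 2
    # chain rule, applied backward from the loss
    dy = y - target              # dL/dy
    dw2 = dy * h                 # dL/dw2 = dL/dy * dy/dw2
    dh = dy * w2                 # dL/dh  = dL/dy * dy/dh
    dw1 = dh * (1 - h ** 2) * x  # dL/dw1 via tanh'(w1*x) = 1 - h^2
    return loss, dw1, dw2
```

Computing each weight's gradient this way, reusing the shared upstream terms (dy, dh) instead of re-deriving them per weight, is exactly the saving the snippet alludes to.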

  2. Neural backpropagation - Wikipedia

    On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres. Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, …

  3. Backpropagation - Simple English Wikipedia, the free encyclopedia

    The term backpropagation is short for "backward propagation of errors". It works especially well for feedforward neural networks (networks without loops) and problems that require supervised learning.

  4. Backpropagation through time - Wikipedia

    Backpropagation through time (BPTT) is a gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by …
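
To make the BPTT idea concrete, here is a hedged sketch (my own toy example, not the Elman-network formulation from the article): unroll a linear recurrence h_t = w·h_{t−1} + x_t over the input sequence, then walk backward through time, accumulating the shared weight's gradient at every step.

```python
def bptt_grad(xs, w, target):
    # forward pass: store every hidden state so the backward pass can reuse them
    hs = [0.0]
    for x in xs:
        hs.append(w * hs[-1] + x)
    loss = 0.5 * (hs[-1] - target) ** 2

    # backward pass through time
    dh = hs[-1] - target       # dL/dh_T at the final step
    dw = 0.0
    for t in range(len(xs), 0, -1):
        dw += dh * hs[t - 1]   # local contribution: dh_t/dw = h_{t-1}
        dh *= w                # propagate to the previous step: dh_{t-1} = dh_t * w
    return loss, dw
```

Because w is shared across all time steps, its gradient is the sum of the per-step contributions; the repeated multiplication by w in the backward loop is also where the well-known vanishing/exploding-gradient behavior of BPTT comes from.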

  5. Backpropagation through structure - Wikipedia

    Backpropagation through structure (BPTS) is a gradient-based technique for training recursive neural networks, proposed in a 1996 paper written by Christoph Goller and Andreas Küchler.

  6. Seppo Linnainmaa - Wikipedia

    Seppo Ilmari Linnainmaa (born 28 September 1945) is a Finnish mathematician and computer scientist known for creating the modern version of backpropagation.

  7. Almeida–Pineda recurrent backpropagation - Wikipedia

    Almeida–Pineda recurrent backpropagation is an extension to the backpropagation algorithm that is applicable to recurrent neural networks. It is a type of supervised learning.

  8. Backpropagation algorithm - Wikipedia, the free encyclopedia (Chinese)

    Backpropagation (English: Backpropagation, meaning "backward propagation of errors", abbreviated BP) is an algorithm for applying gradient descent to multilayer artificial neural networks: it uses the chain rule to compute the gradient of the loss function with respect to the weights of each layer, and updates the weights to …

  9. Digital back-propagation - Wikipedia

    Digital back-propagation (DBP) is a technique for compensating all fiber impairments in optical transmission systems. DBP is a form of non-linearity compensation (NLC). DBP uses the back …

  10. Paul Werbos - Wikipedia

    He is best known for his 1974 dissertation, which first described the process of training artificial neural networks through backpropagation of errors. [1] He also was a pioneer of recurrent neural networks.