Neural Networks

MP Model

MP Model (McCulloch and Pitts)
si = f(Σ(Wij*sj) - ci)
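A minimal Python sketch of the MP model above, using a hard threshold as f (the weights, inputs, and threshold below are illustrative, not from the original text):

    # McCulloch-Pitts neuron: s_i = f( sum_j(W_ij * s_j) - c_i )
    def mp_neuron(weights, inputs, threshold):
        """Fire (output 1) when the weighted input sum reaches the threshold c_i."""
        net = sum(w * s for w, s in zip(weights, inputs)) - threshold
        return 1 if net >= 0 else 0

    # Example: a 2-input AND gate, with weights and threshold picked by hand
    print(mp_neuron([1, 1], [1, 1], 1.5))   # -> 1
    print(mp_neuron([1, 1], [0, 1], 1.5))   # -> 0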

Hebb's Rule

Neural Weight Learning Rule (Hebb)
Rule : ΔWij = a * si * sj,  a > 0
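
One Hebbian update step under the rule above; the learning rate a and the activity values are illustrative:

    # Hebb's rule: delta(W_ij) = a * s_i * s_j, with a > 0
    def hebb_update(W, s, a=0.1):
        """Strengthen W[i][j] in proportion to the co-activity of units i and j."""
        n = len(s)
        for i in range(n):
            for j in range(n):
                W[i][j] += a * s[i] * s[j]
        return W

    W = [[0.0, 0.0], [0.0, 0.0]]
    print(hebb_update(W, [1, 1]))   # both units active -> every weight grows by 0.1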

Perceptron

Perceptron (Rosenblatt)
Learning algorithm :

   algorithm Learning
     initialize Wi(0) with small values,  1<=i<=n+1
     repeat
       input sample (x1, x2, …, xn, 1) and its desired output d
       Wi(t+1) = Wi(t) + speed * [d - Y(t)] * xi,  1<=i<=n+1, 0 < speed <= 1
     until converge
   end algorithm
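
A runnable Python version of the learning algorithm above, trained on the AND function as a toy linearly separable task (the data, learning rate, and epoch count are illustrative):

    # Perceptron rule: W_i(t+1) = W_i(t) + speed * (d - Y(t)) * x_i
    def train_perceptron(samples, n_inputs, speed=0.5, epochs=20):
        W = [0.0] * (n_inputs + 1)        # the extra weight acts on the constant input 1
        for _ in range(epochs):
            for x, d in samples:
                xb = list(x) + [1]        # append the bias input
                y = 1 if sum(w * xi for w, xi in zip(W, xb)) > 0 else 0
                W = [w + speed * (d - y) * xi for w, xi in zip(W, xb)]
        return W

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # AND gate
    print(train_perceptron(data, 2))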

Adaptive Linear Element

Adaline, Adaptive Linear Element (Widrow and Hoff)
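
The page does not spell out the Adaline update, but the Widrow-Hoff (LMS) rule it refers to adjusts the weights against the linear output before any thresholding; a minimal sketch under that assumption:

    # LMS rule: W <- W + speed * (d - W.x) * x, error measured on the linear output
    def lms_step(W, x, d, speed=0.05):
        y = sum(w * xi for w, xi in zip(W, x))   # linear output, no threshold
        return [w + speed * (d - y) * xi for w, xi in zip(W, x)]

    W = [0.0, 0.0]
    for _ in range(200):                         # illustrative single training pair
        W = lms_step(W, [1.0, 2.0], 1.0)
    print(W)                                     # converges so that W.x ~= 1.0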

Back Propagation Algorithm

Back Propagation Algorithm for Multilayer Perceptron
Learning algorithm :

    algorithm Back Propagation
      initialize W(0) with small value
      repeat
        Choose next training pair (x, d) and let the 0th layer be u0 = x
        Feed Forward
        Compute Gradient
        Update Weights 
      until converge
    end algorithm
    function Feed Forward
      for layer = 1 to L do
        for node = 1 to N[layer] do
          u[layer, node] = f(Σ( W[layer, node, i] * u[layer-1, i] ))
        end for
      end for
    end function
    function Compute Gradient
      for layer = L to 1 do
        for node = 1 to N[layer] do
          if layer = L then
            e[L, node] = u[L, node] - d[node]
          else
            e[layer, node] = Σ( e[layer+1, m] * u[layer+1, m] *
                                (1-u[layer+1, m]) * W[layer+1, node, m] )
        end for
        for all weights (j, i) in layer do
          g[layer, j, i] = e[layer, j] * u[layer, j] * (1-u[layer, j]) * u[layer-1, i]
        end for
      end for
    end function
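
A compact runnable Python version of the algorithm above, using the sigmoid so the derivative factor is u*(1-u) as in Compute Gradient; the XOR task, layer sizes, learning rate, and epoch count are illustrative choices, not from the original text:

    import math, random

    random.seed(0)

    def f(z):                       # sigmoid activation, so f'(net) = u * (1 - u)
        return 1.0 / (1.0 + math.exp(-z))

    N_IN, N_HID = 2, 3              # 2 inputs -> 3 hidden -> 1 output, with biases
    W1 = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
    W2 = [random.uniform(-1, 1) for _ in range(N_HID + 1)]
    speed = 0.5
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

    def forward(x):
        u0 = list(x) + [1]          # constant 1 feeds the bias weights
        u1 = [f(sum(w * v for w, v in zip(row, u0))) for row in W1] + [1]
        u2 = f(sum(w * v for w, v in zip(W2, u1)))
        return u0, u1, u2

    for _ in range(10000):
        for x, d in data:
            u0, u1, u2 = forward(x)
            d2 = (u2 - d) * u2 * (1 - u2)       # output delta: e * u * (1-u)
            d1 = [d2 * W2[j] * u1[j] * (1 - u1[j]) for j in range(N_HID)]
            for j in range(N_HID + 1):          # gradient descent step
                W2[j] -= speed * d2 * u1[j]
            for j in range(N_HID):
                for i in range(N_IN + 1):
                    W1[j][i] -= speed * d1[j] * u0[i]

    for x, d in data:
        print(x, d, round(forward(x)[2], 2))    # outputs approach the targets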

Lyapunov Theorem

Lyapunov theorem
For a continuous dynamic system dX/dt = F(X, t), take V(X, t) = X(t)'*P*X(t).
if V(X, t) is positive definite          // i.e. V(X, t) >= a(|X|) > 0
and dV/dt(X, t) is negative semidefinite // i.e. dV/dt(X, t) <= -r(|X|) < 0
and V(X, t) < b(|X|)                     // b(|X|) is a nondecreasing function
and a(|X|) -> infinity as |X| -> infinity
then
lim(t->infinity) locus(t; X0, t0) = Xe, for any initial state X0
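
As a concrete instance (my example, not from the page): for dX/dt = -X with V(X) = X^2, V is positive definite and dV/dt = -2X^2 < 0 away from the origin, so every trajectory converges to Xe = 0. A quick numerical check:

    # Illustrative check: simulate dX/dt = -X and watch V(X) = X^2 decay to 0
    def simulate(x0, dt=0.01, steps=1000):
        x = x0
        for _ in range(steps):
            x += dt * (-x)          # Euler step of dX/dt = -X
        return x

    for x0 in (5.0, -3.0, 0.5):
        x = simulate(x0)
        print(x0, "->", round(x, 6), "V =", round(x * x, 8))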

Kolmogorov Theorem

Kolmogorov theorem
Any continuous function f(e1, e2, …, em) -> (r1, r2, …, rn),
where 0 <= ei <= 1 and each ri is a real number, can be implemented by
a 3-layer network in which
layer 1 has m nodes
layer 2 has 2m+1 nodes
layer 3 has n nodes
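
The theorem is an existence result: it fixes the layer sizes but does not construct the weights. A trivial sketch of the sizing it prescribes:

    # Layer sizes prescribed by Kolmogorov's theorem for f: [0,1]^m -> R^n
    def kolmogorov_shape(m, n):
        return [m, 2 * m + 1, n]

    print(kolmogorov_shape(4, 2))   # -> [4, 9, 2]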

Back Propagation Convergence Theorem

Back Propagation Convergence Theorem
Given samples {(x1, y1), (x2, y2), …, (xL, yL)} where yi = f(xi), xi in R^m and yi in R^n,
we can build a 5-layer network with 2m+2n+L nodes that optimally approximates
the function f(x).

Hopfield Network

Hopfield Network
Learning algorithm :

  algorithm Hopfield
    initialize weights:
      Wij = Σ( (2Vi(m)-1) * (2Vj(m)-1) ),  i <> j   // V(m) is the m-th training vector
      Wij = 0,                             i = j
    initialize ai(0) = Xi   // X is the input vector, each Xi is 0 or 1
    repeat
      aj(k+1) = fn[ Σ( Wij*ai(k) ) ],  1 <= j <= M
    until converge
  end algorithm
  ui(k+1) = Σ( Wij*aj(k) ) + ci          // u is the net input
  ai(k+1) = 1      if ui(k+1) > 0        // a is the thresholding function
          = 0      if ui(k+1) < 0
          = ai(k)  if ui(k+1) = 0
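
A runnable Python sketch of the network above: weights come from the initialization rule, states are 0/1, and recall uses the thresholding update with ci = 0 (the patterns and probe below are illustrative):

    # Hopfield weights: W_ij = sum_m( (2*V_i(m)-1) * (2*V_j(m)-1) ), W_ii = 0
    def train_hopfield(patterns, n):
        W = [[0] * n for _ in range(n)]
        for V in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        W[i][j] += (2 * V[i] - 1) * (2 * V[j] - 1)
        return W

    def recall(W, x, sweeps=10):
        a = list(x)
        for _ in range(sweeps):
            for j in range(len(a)):           # asynchronous updates
                u = sum(W[j][i] * a[i] for i in range(len(a)))
                if u > 0:
                    a[j] = 1
                elif u < 0:
                    a[j] = 0                  # u == 0 keeps the old state
        return a

    patterns = [[1, 0, 1, 0, 1, 0], [1, 1, 1, 0, 0, 0]]
    W = train_hopfield(patterns, 6)
    print(recall(W, [1, 0, 1, 0, 1, 1]))      # noisy probe settles to [1, 0, 1, 0, 1, 0]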

Lyapunov Energy Function

Lyapunov Energy Function
E = -1/2*Σ( Wij*ai*aj ) - Σ( ci*ai )
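
Continuing the Hopfield sketch above (reusing its W), each asynchronous update never increases this energy, which is what guarantees convergence; ci = 0 here, matching that example:

    # E = -1/2 * sum_ij( W_ij * a_i * a_j ) - sum_i( c_i * a_i )
    def energy(W, a, c=None):
        n = len(a)
        c = c or [0] * n
        quad = sum(W[i][j] * a[i] * a[j] for i in range(n) for j in range(n))
        return -0.5 * quad - sum(c[i] * a[i] for i in range(n))

    print(energy(W, [1, 0, 1, 0, 1, 1]))   # noisy probe: E = 2.0
    print(energy(W, [1, 0, 1, 0, 1, 0]))   # stored pattern: E = -2.0 (lower)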

Genetic Algorithm

Genetic Algorithm
Algorithm :

  Algorithm Genetic
    randomly initialize population(0)
    repeat
      repeat
        select chromosome1, chromosome2 from population
        (new_chromosome1, new_chromosome2) = crossover(chromosome1, chromosome2)
        mutate(new_chromosome1, new_chromosome2)
        add (new_chromosome1, new_chromosome2) into new_population
      until new_population is full
      population = new_population
    until convergence
  End Algorithm
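
A runnable Python instance of the loop above on the classic OneMax toy problem (maximize the number of 1 bits); the problem, tournament selection, single-point crossover, and mutation rate are illustrative choices:

    import random

    random.seed(1)

    BITS, POP, GENS = 20, 30, 60

    def fitness(ch):                        # OneMax: count the 1 bits
        return sum(ch)

    def select(pop):                        # tournament selection of size 2
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):                  # single-point crossover
        cut = random.randrange(1, BITS)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

    def mutate(ch, rate=0.02):              # per-bit flip mutation
        return [1 - b if random.random() < rate else b for b in ch]

    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for _ in range(GENS):
        new_population = []
        while len(new_population) < POP:    # fill the next generation
            c1, c2 = crossover(select(population), select(population))
            new_population += [mutate(c1), mutate(c2)]
        population = new_population

    print(max(fitness(ch) for ch in population))   # approaches BITS (= 20)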
Analysis :
