Paper download: AgentNet.doc

Chung-Chen Chen

Abstract

The back-propagation neural network successfully solves many problems in computer science. However, it is not a good model for symbolic problems such as natural language processing (NLP), because a neuron cannot represent a symbol intuitively. In this paper, we propose an agent network based on the back-propagation algorithm that addresses not only the problem of weight adjustment but also the problem of symbolic computation.

Keywords: Agent, Neural Network, Back-propagation, Natural Language

1. Introduction

"Agent" has recently become a fashionable idea in computer science, yet it has no formal definition. Informally speaking, an agent is an object that perceives its environment through sensors and acts upon that environment through effectors [1]. Several questions arise from this informal definition, concerning the terms "environment", "sensor", and "effector". In this paper, we propose a learning model for a network of agents, and we define "environment", "sensor", and "effector" within this model. The learning model is similar to the back-propagation neural network (BPNN), so we call it the back-propagation agent network (BPAN). The differences between BPAN and BPNN are listed below.

2. Background

Although BPAN is similar to BPNN, the idea comes from the LMS (least mean squares) learning approach invented by Widrow and Hoff in 1960 [2]. The LMS approach is a classic method for machine learning and is categorized as a reinforcement learning approach in several books [1][3]. The goal of the LMS approach is to learn a set of weights for a utility function that minimizes the error E between the training values (Vtrain) and the values predicted by the hypothesis (Vpredict). However, for many problems, such as machine control and chess playing, the program does not get a response from the environment at every step, so no training value is available to it.
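The weight-adjustment goal described above can be sketched in code. This is a minimal illustration assuming a linear hypothesis Vpredict(e) = w · x(e); the feature encoding, learning rate, and toy training data are illustrative assumptions, not taken from the paper.

```python
# Sketch of a Widrow-Hoff (LMS) weight update for a linear hypothesis
# Vpredict(e) = w . x(e). Assumed setup: each example e is encoded as a
# feature vector x, with a known training value v_train.

def lms_update(w, x, v_train, eta=0.1):
    """One LMS step: nudge the weights to reduce (v_train - v_predict)^2."""
    v_predict = sum(wi * xi for wi, xi in zip(w, x))
    error = v_train - v_predict
    # Each weight moves proportionally to the error and its input feature.
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Toy data consistent with the target function v = 2*x0 + 1*x1.
w = [0.0, 0.0]
examples = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
for _ in range(200):
    for x, v in examples:
        w = lms_update(w, x, v)
# After training, w approaches [2.0, 1.0].
```

Because the toy system is consistent, repeated updates drive the error toward zero and the weights toward the target function.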
Widrow and Hoff proposed a clever approach: when no response is available at the current time, use the average utility of the successor states as the expected value (Vexpect). The following formula defines Vexpect; the error function is then the difference between Vexpect and Vpredict:

Vexpect(e) = (1 / |successor(e)|) * Σ s ∈ successor(e) Vpredict(s)

Widrow and Hoff proposed an algorithm to learn weights that minimize this error function, shown in Figure 1.

Algorithm LMS-Learning (Figure 1)
  Initialize weights randomly.
  …

Algorithm Back-Propagation (Figure 2)
  For each training example (e, v)
  …

If we read the algorithms in Figure 1 and Figure 2 carefully, we find that both of them can be abstracted into the following algorithm:

  For each example e
  …

3. The Model of the Back-Propagation Agent Network

A BPAN is a network with shared variables. Agents in the network "sense" the values of variables, adjust their hypotheses, and modify the values of variables as their "effect". Shared variables in the network may be categorized into two groups: prediction variables and expectation variables. We define a set of functions for each agent in this model.

4. Examples for the Back-Propagation Agent Network

4.1 Reinforcement Learning

5. Conclusion and Future Works

References
1. Russell, S., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.
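The expectation step described in Section 2, taking the average predicted value of the successor states as the training target when the environment gives no response, can be sketched as follows. The toy state graph and value table are illustrative assumptions, not from the paper.

```python
# Sketch of the Vexpect computation: the target value for a state e is the
# average of Vpredict(s) over the successors s of e. The states "a", "b",
# "c" and their values are hypothetical.

def v_expect(e, successors, v_predict):
    """Vexpect(e) = average of Vpredict(s) over s in successor(e)."""
    succ = successors[e]
    return sum(v_predict[s] for s in succ) / len(succ)

successors = {"a": ["b", "c"]}    # state "a" can move to "b" or "c"
v_predict = {"b": 1.0, "c": 3.0}  # current hypothesis values

print(v_expect("a", successors, v_predict))  # → 2.0
```

The error E at state "a" is then the difference between this expectation (2.0) and the hypothesis's own prediction for "a", which is what the LMS update drives toward zero.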
反傳遞代理人網路 -- Back-Propagation Agent Network
page revision: 1, last edited: 24 Aug 2010 12:27





