R Seethalakshmi, K Ravichandran, P Vasanth
cooperation, distributed agents, intelligence, multi-agents, random-walk
R Seethalakshmi, K Ravichandran, P Vasanth. Reliability Analysis Of Cooperating And Communicating Distributed Mobile Agents. The Internet Journal of Medical Informatics. 2005 Volume 2 Number 2.
Recently the concepts and technologies associated with Intelligent Agents have become popular, and issues such as reliability analysis and design methodologies have increasingly evolved. Distributed Agents have received particular attention, as they are used to access distributed resources in a network. To access these resources efficiently, a reliable Distributed Communication Network (DCN) is required. This paper analyzes the reliability of a DCN formulated using Distributed Agents. Agents are autonomous entities, often called software robots, and their interactions are cooperative. The reliability of the DCN depends on the performance, availability, and strategy of the Distributed Agent based system, and it provides a failure-free environment for communication. In this paper a Distributed Agent system is defined, consisting of multiple Agents, each of which accomplishes an independent task by taking a random walk, processing the information at the chosen node, and returning the result to the requestor. The Distributed Agents act as tokens that travel over the network from node to node. Each Agent must complete its walk successfully by a specified deadline t, and it needs a connected route during the period t to maintain mobility. This defines the concept of a Distributed Agent tour or walk, which is used to estimate the reliability of the Distributed Agents. The problem thus reduces to the successful completion of an Agent's tour, given the status of the Distributed Communication Network. With such a system, the robustness of the DCN is asserted, and hence its reliability is increased.
Multi-Agent based systems form a young and growing area of research. An agent is a system that is situated in some environment and that is capable of autonomous action in order to meet its design objectives [Wooldridge 2002]. Two basic properties of agents are that they are autonomous and that they react to their environment. Typical environments for agents are dynamic and unpredictable, and may be reliable or unreliable. Agents possess general properties: they are reactive, proactive, flexible, robust, and social, and these properties are widely used to achieve the goal for which an agent is set.
The advantages of agents are that they reduce coupling, they encapsulate invocation, their methods are triggered externally, and they do not provide control points to external entities. Agents are broadly categorized as proactive agents and reactive agents: reactive agents respond to events, whereas proactive agents achieve goals. Both are widely used in this application.
The Agent Environment
The agent environment consists of the agent, actions, percepts, events, beliefs, goals and plans, as depicted in Figure 1 below. An agent is an autonomous entity. A goal is something the agent is working towards achieving. An event is a significant occurrence that the agent should respond to in some way. A belief is some aspect of the agent's knowledge or information about the environment, itself or other agents. A plan is a way of realizing a goal. The agent environment is composed of these basic components.
Agent Execution Cycle
The concepts of actions, percepts, events, goals, plans and beliefs are related to each other via the execution cycle that implements the agent's decision-making. The execution cycle describes how instances of these concepts interact as an agent executes. Figure 2 below shows the agent execution cycle.
The Execution Cycle consists of the following steps:
Events are processed to update beliefs and to generate immediate actions.
Goals are updated.
Plans are selected from the plan library for achieving goals or handling events.
A step of the selected plan is executed, yielding new events, goals, belief changes, or actions.
The process for handling an event or attempting to achieve a goal consists of determining the relevant plans from the plan library, determining the subset of relevant plans that are applicable, selecting one of the applicable plans, and executing the selected plan.
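The execution cycle described above can be sketched in Python. This is a minimal illustration, not code from the paper; all class and function names (Agent, Plan, cycle, and so on) are illustrative, and the belief and goal handling is deliberately simplified.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    name: str
    trigger: str        # the event/goal type this plan is relevant to
    context: object     # callable applicability test over current beliefs
    steps: list         # callables, one per plan step

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)
    goals: list = field(default_factory=list)
    plan_library: list = field(default_factory=list)

    def select_plan(self, trigger):
        # Determine relevant plans, filter to the applicable subset,
        # then select one (here simply the first).
        relevant = [p for p in self.plan_library if p.trigger == trigger]
        applicable = [p for p in relevant if p.context(self.beliefs)]
        return applicable[0] if applicable else None

    def cycle(self, event):
        self.beliefs[event] = True        # step 1: process event, update beliefs
        if event not in self.goals:       # step 2: update goals
            self.goals.append(event)
        plan = self.select_plan(event)    # step 3: select a plan
        if plan is not None:
            for step in plan.steps:       # step 4: execute the plan's steps
                step(self.beliefs)
            self.goals.remove(event)      # goal achieved
        return plan
```

An agent is created with a plan library; calling `cycle(event)` runs one pass of the event-to-action loop described in the steps above.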
Achievement of Goal by Agent
A goal-plan tree pictorially represents how the agent achieves a goal. The children of each goal are alternative ways of achieving that goal (OR), whereas the children of each plan are sub-goals that must all be achieved in order for the plan to succeed (AND). Let C be the number of plans applicable to each goal, let S be the number of sub-goals per plan (except for the leaves of the goal-plan tree, which have no sub-goals), and let D be the depth of the tree, measured in the number of goal levels. The number of ways in which the goal at the root of the goal-plan tree can be achieved is then C^((S^D - 1)/(S - 1)), as derived below.
Figure 3 below shows a typical goal-plan tree. Let G be a goal that has C applicable plans, each of which can be executed in p possible ways. Since each plan is an alternative, the number of ways in which G can be achieved is the sum p + p + ... + p, that is, C * p. This count is formalized as Δ_G(D), with Δ_G(1) = C and

Δ_G(D+1) = C * Δ_P(D+1)

where Δ_P(D) is the number of ways in which the plan at the root of a tree of depth D can be executed. Since a plan with S sub-goals succeeds only when all of its sub-goals are achieved, Δ_P(D+1) = Δ_G(D)^S, and hence

Δ_G(D+1) = C * Δ_P(D+1) = C * Δ_G(D)^S
More generally, for a goal-plan tree of depth D the number of options is

C^(S^(D-1) + ... + S^2 + S + 1)
This can be simplified using the geometric series. Let Σ = S^(D-1) + ... + S^2 + S + 1. Then S·Σ = S^D + ... + S^2 + S, and subtracting gives Σ = (S^D - 1)/(S - 1). Hence

C^(S^(D-1) + ... + S^2 + S + 1) = C^((S^D - 1)/(S - 1))
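The recursion and the closed form can be checked against each other numerically. The following short Python sketch (function names are illustrative) compares Δ_G computed recursively with the closed-form expression for small values of C, S and D:

```python
# Recursion from the text: Delta_G(1) = C, Delta_G(D+1) = C * Delta_G(D)^S.
def options_recursive(C, S, D):
    if D == 1:
        return C
    return C * options_recursive(C, S, D - 1) ** S

# Closed form: exponent S^(D-1) + ... + S + 1 = (S^D - 1)/(S - 1).
def options_closed_form(C, S, D):
    return C ** ((S ** D - 1) // (S - 1))   # exact integer division for S >= 2

# The two expressions agree for all small parameter choices:
for C in (2, 3):
    for S in (2, 3):
        for D in (1, 2, 3):
            assert options_recursive(C, S, D) == options_closed_form(C, S, D)
```

For example, with C = 2, S = 2 and D = 2 both give 2 * 2^2 = 2^3 = 8 options.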
Modeling the Scenario
The scenario comprises a multi-agent system. Here, four agents with different goals are considered: Agent Creator, Agent Updater, File Searcher, and Remote Desktop Capturer. Figure 4 depicts the scenario in the form of a class diagram.
An Algorithm for computing the reliability
Here a distributed computing network consisting of various agents is formulated. Each agent is configured with its plans, goals, actions and beliefs. Figure 5 below shows a typical DCN.
Algorithm for a Search Agent
The DCN shown in Figure 5 is formulated so that the agents are capable of communicating and cooperating. The agents are created dynamically, and hence the number of agents, their nature, and so on are randomized. The algorithm for a Search Agent is as follows:
Dynamically create as many agents as required.
Define the element to be searched. It can be any data: for example, a file name, a text string in a file, or a directory.
Create a random walk for each agent so that the agents can search the various paths in parallel.
Analyze the reliability of such a system using fuzzy logic with triangular membership functions.
Typically the reliability is a probability between 0 and 1, decomposed into conditions such as non-availability, exceptions/errors, and the perfect scenario.
Observe, record, and analyze the results.
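The steps above can be sketched in Python under simplifying assumptions: the network is modeled as an adjacency dictionary, each agent performs a bounded random walk, and a triangular fuzzy membership function grades the outcome. All names and parameter values are illustrative, not from the paper's implementation.

```python
import random

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 when x = b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def random_walk_search(network, start, target, max_steps, rng):
    """Walk the network at random; return the step count on success, else None."""
    node = start
    for step in range(1, max_steps + 1):
        if node == target:
            return step
        neighbours = network.get(node, [])
        if not neighbours:
            return None          # dead end: walk cannot continue
        node = rng.choice(neighbours)
    return None                  # deadline exceeded

def estimate_reliability(network, start, target, agents, max_steps, seed=0):
    """Fraction of agents whose walk reaches the target within the deadline."""
    rng = random.Random(seed)
    found = sum(
        random_walk_search(network, start, target, max_steps, rng) is not None
        for _ in range(agents)
    )
    return found / agents
```

The estimated success rate can then be graded by `triangular` into fuzzy categories such as non-availability, degraded, and the perfect scenario.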
Similarly, the other tasks, such as remote desktop capture and remote update, are configured.
Results and Analysis
The following figures show the results and analysis of agents in the DCN. Figure 6 depicts the scenario of the search agent; here the random walk is also displayed. Figure 7 shows the output of the agents in the screen-capture scenario. Figure 8 displays the reliability of the DCN for various network statuses, and Figure 9 shows the multi-agent analysis. The analysis concludes that the random walk, search, update, and screen-capture agents in the DCN are highly reliable, and that the reliability and performance of the agent-based system are improved compared with conventional computing.
A multi-agent system is one that consists of a number of agents which interact with one another. To interact successfully, they require the ability to cooperate, coordinate, and negotiate with each other, much as people do. The reliability of such communicating and cooperating agents depends on the dynamic environment, and it is asserted that the reliability depends on the current status of the network. It is also found that the reliability can be improved if the network performance is enhanced.