Research and Advances
Computing Applications

A Multiagent System For U.S. Defense Research Contracting

The Multi-Agent Contracting System incorporates both learning and a natural language interface to automate contracting officer query resolution during the process of defense contract acquisition.
  1. Introduction
  2. System Architecture
  3. Learning in MACS
  4. Natural Language Processing
  5. System Operation
  6. Conclusion
  7. References
  8. Authors
  9. Figures

In U.S. defense research contracting, contracting officers must address and complete administrative details for contract acquisition. A primary source of support is the Defense Acquisition Deskbook and its FAQ lists. However, contracting officers typically ask specialized questions that require human experts for resolution. Automation is one way to reduce the burden on contracting agencies' human resources during the contract acquisition process. Software agents provide an approach to automation that has proved successful in several areas, including buying and selling online, coordinating software development projects, monitoring financial transactions, and retrieving information. The combination of past work on automation and the ability of agents to search vast repositories of information [7] suggests that an agent-based technique is promising for contract acquisition.

The Multi-Agent Contracting System (MACS) has been developed to automate responses to contracting officers’ queries during the pre-award phase of the contract acquisition process. MACS agents are modeled after the expertise and activities required of contracting officers in defense contracting.

The key issues affecting system performance, and in turn influencing the acceptance of many agent systems, are their learning ability and user-friendly interfaces. Complete knowledge cannot be encoded into an intelligent agent system a priori; thus, systems must be able to learn and apply knowledge gained from experience to improve their performance [5]. To improve system performance and increase user acceptance, MACS incorporates both a learning capability and a natural language (NL) interface. These features contribute to the continual improvement of MACS; the NL interface enhances learning, and learning improves the NL interface via a positive feedback loop. This is particularly important in light of parser limitations [4] and the fact that user input may be interpreted inappropriately.


System Architecture

MACS was developed to evaluate the effect of learning and an NL interface on system performance. The MACS architecture implements a typical three-tiered brokered architecture containing nine agents: User, Facilitator, Natural Language Processing (NLP), Bayesian Learning (BL), and five specialty agents (SAs). The User agent sits at the highest tier, interfacing with users through keyword searches or NL queries. The Facilitator agent interfaces between the User agent and each of the other agents in MACS while also coordinating agent activities. The seven remaining agents interface with the Facilitator agent and are responsible for resolving user queries.

For keyword queries, the Facilitator agent forwards user queries from the User agent directly to the BL agent. For NL queries, the Facilitator agent forwards a user query to the NLP agent for parsing. The parsed message is returned to the Facilitator agent and then forwarded to the BL agent. In both cases, the BL agent creates an action plan that is issued to the Facilitator agent for completion. The action plan determines which SA(s) should be contacted to resolve a query. The Facilitator agent completes the plan by performing the necessary communication among the agents. This communication leads to solutions being sent from an SA to the Facilitator agent and then to the User agent. The Facilitator also forwards information regarding which agents responded to which queries to the BL agent, so it can learn response plans for similar queries in the future.

The five SAs in MACS relate to the pre-award phase of a contract and cover mutually exclusive areas of expertise: Forms, Justification, Evaluation, Synopsis, and Contracts. The Forms agent identifies the forms needed to complete procurement request packages. The Justification agent indicates when justification and approval are required to complete procurement requests. The Evaluation agent provides guidelines for proposal evaluation. The Synopsis agent identifies types of synopses for given procurement requests. Lastly, the Contracts agent identifies the type and nature of contracts.

Because the domain knowledge of the SAs is mutually exclusive, direct coordination among SAs is not required. Instead, the Facilitator agent coordinates the SAs. The learning capability allows the BL agent to learn which SA(s) should receive incoming messages in order to minimize the number of communications required among agents. Information learned by the BL agent is passed to the Facilitator agent for efficient query resolution.

Implications of the MACS architecture include:

  • The brokered architecture saves on computational resources because messages are not broadcast to all agents. For the nine-agent prototype system (with all agents residing on the same computer), brokering may not be much better than broadcasting. But as the number of agents increases (such as when scanning agents are added to acquire new contracting knowledge, SAs to store and apply new knowledge, and agents to handle the award and post-award phases of contracting) and the system is distributed across networks, gains in computation time are more pronounced;
  • The minimal communication among SAs makes MACS amenable to changes and upgrades over time. Individual agents are easily removed, added, or changed, affecting only the link between the changed agent and the Facilitator agent;
  • MACS can handle new knowledge domain areas by creating a new agent, coding it with new knowledge, and plugging it into the system; scanning agents, which search for new knowledge, can also be added to MACS for the automated updating of knowledge in the SAs; and
  • Communication among agents can be designed into MACS at any time in the future by expanding the capabilities of the agents to communicate with more than just the Facilitator agent.
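The brokered pattern described in the list above can be sketched in a few lines. All class, method, and agent names below are illustrative assumptions, not the MACS implementation:

```python
class Facilitator:
    """Minimal sketch of a brokered dispatcher (hypothetical names).

    Agents register only with the Facilitator and never address one
    another directly, so adding or removing an agent touches a single link.
    """

    def __init__(self):
        self.registry = {}  # agent name -> message handler

    def register(self, name, handler):
        self.registry[name] = handler

    def dispatch(self, query, targets):
        # Deliver the query only to the agents named in the action plan,
        # rather than broadcasting it to every registered agent.
        return {name: self.registry[name](query)
                for name in targets if name in self.registry}


broker = Facilitator()
broker.register("Forms", lambda q: "DD Form 1498")
broker.register("Contracts", lambda q: "cost-reimbursement")
answers = broker.dispatch("Which form do I need?", ["Forms"])
```

Because each SA touches only its link to the broker, swapping an SA's handler (or registering a new one) requires no change to any other agent.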


Learning in MACS

Intelligent agent performance can be sensitive to the initial distribution of knowledge among agents in a multiagent system [2], and systems built on a fixed knowledge base tend to degrade significantly as the limits of that knowledge are reached [5]. Thus, we focus on learning for enhanced performance, especially in combination with an NL interface. Rather than learning the preferences of other agents, MACS learns the abilities of other agents, as well as user objectives, to enhance its own performance and efficiency.

Learning occurs in two parts of MACS: Bayesian learning applied in the BL agent and reinforcement learning applied in the NLP agent. The BL agent applies a Bayesian model to learn which of the SAs should receive incoming queries, in the following steps:

1. The parsed output from the NLP agent is sent to the BL agent. For each SA:

1.1. Calculate the percentage of prior queries in which each keyword appears;

1.2. Calculate the likelihood that a new query, q, corresponds to the domain knowledge of that SA by multiplying the percentages calculated in 1.1 that correspond to q;

1.3. Apply Bayes' formula:

1.3.1. Multiply the prior probability that q should be sent to that SA (its likelihood given no prior queries) by the result from 1.2;

1.3.2. Sum the calculations from 1.3.1 across all SAs;

1.3.3. Divide each individual result from 1.3.1 by the sum from 1.3.2;

1.4. Divide the result for each SA from 1.3.3 by that SA's prior probability;

1.5. Sort the results in descending order;

1.6. Rank the SAs according to the results from 1.4 (highest = rank 1);

1.7. If the SA with rank 2 is within 0.001% of the SA with rank 1,

1.7.1. then send q to all SAs with rank 1 or rank 2;

1.7.2. else send q only to the SAs with rank 1; and

2. Update the prior probabilities (learning) with the results from 1.4.

The updated probabilities are used as a basis for routing user queries to SAs in the future.
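The routing steps above can be sketched in code. This is a simplified illustration: the class and method names are ours, and the add-one smoothing and the exact form of the prior update are assumptions where the published steps leave details open:

```python
from collections import defaultdict

class BayesianRouter:
    """Simplified sketch of the BL agent's query routing (names assumed)."""

    def __init__(self, specialty_agents, tie_threshold=1e-5):  # 0.001% tie band
        self.agents = list(specialty_agents)
        self.prior = {a: 1.0 / len(self.agents) for a in self.agents}
        self.keyword_counts = {a: defaultdict(int) for a in self.agents}
        self.query_counts = {a: 0 for a in self.agents}
        self.tie_threshold = tie_threshold

    def record(self, agent, keywords):
        """Facilitator feedback: `agent` resolved a query containing `keywords`."""
        self.query_counts[agent] += 1
        for kw in keywords:
            self.keyword_counts[agent][kw] += 1

    def _likelihood(self, agent, keywords):
        # Steps 1.1-1.2: product of per-keyword frequencies in prior queries
        # (the add-one smoothing is our addition, to avoid zero probabilities).
        n = self.query_counts[agent]
        lik = 1.0
        for kw in keywords:
            lik *= (self.keyword_counts[agent][kw] + 1) / (n + 2)
        return lik

    def route(self, keywords):
        # Step 1.3: Bayes' rule -> posterior probability per SA.
        joint = {a: self.prior[a] * self._likelihood(a, keywords)
                 for a in self.agents}
        total = sum(joint.values())
        posterior = {a: j / total for a, j in joint.items()}
        # Step 1.4: divide each posterior by the SA's prior probability.
        score = {a: posterior[a] / self.prior[a] for a in self.agents}
        # Steps 1.5-1.6: rank SAs by score, highest first.
        ranked = sorted(self.agents, key=lambda a: score[a], reverse=True)
        # Step 1.7: include the rank-2 SA when it is nearly tied with rank 1.
        targets = [ranked[0]]
        if len(ranked) > 1 and (score[ranked[0]] - score[ranked[1]]
                                <= self.tie_threshold * score[ranked[0]]):
            targets.append(ranked[1])
        # Step 2: update the priors with what was learned from this query.
        self.prior = posterior
        return targets


router = BayesianRouter(["Forms", "Contracts", "Synopsis"])
for _ in range(3):
    router.record("Forms", ["form", "dd"])  # Facilitator feedback
targets = router.route(["form"])            # routes to the Forms agent
```

With no recorded history all SAs score identically, so the tie rule in step 1.7 sends the query to both the rank-1 and rank-2 agents; as feedback accumulates, a single SA dominates.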

Reinforcement learning [6] involves agents acquiring new knowledge through feedback from previous experience and the environment. A reinforcement signal results from an agent’s actions, and the agent learns to improve its performance based on these signals [1]. MACS learns what a user is querying based on similar past questions. When a user inputs a new NL query, the query either does or does not parse. If it parses, the Facilitator agent forwards the query to the BL agent to determine which SA(s) should receive the query. If it does not parse, reinforcement learning is invoked to help resolve the query.

First, a cache is searched to determine whether the NLP agent has already learned how to respond to the unparsable query. If so, the NLP agent applies its previous learning, exchanging the unparsable query for a rephrased, parsable one that asks the same question. Users can provide feedback identifying cached queries that do not adequately represent the current query; in this case, the cached query is not used. If the unparsable query is not present in the cache, the user is asked to rephrase it. Once rephrased, both the original and the new queries are sent to the NLP agent. If the rephrased question parses, it serves as feedback for the future. In this way, MACS learns from user reinforcement and can resolve the original query in future user sessions without further feedback, reducing the burden on users of having to reword unparsable queries. A particular user could still be burdened by the need to rephrase a query an unreasonable number of times; however, the robustness of the NLP agent in handling many grammatical forms suggests this situation would be the exception rather than the norm [8].
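The reinforcement loop just described can be sketched as a small cache of user-confirmed rephrasings. The class, method, and parser names here are illustrative assumptions, not MACS code:

```python
class RephraseCache:
    """Sketch of the NLP agent's reinforcement loop (hypothetical names).

    Maps unparsable queries to user-confirmed rephrasings that parse.
    """

    def __init__(self, parse):
        self.parse = parse   # returns a parse result, or None on failure
        self.cache = {}      # unparsable text -> confirmed parsable rephrasing

    def handle(self, query):
        """Return (parse_result, query_needing_rephrase)."""
        result = self.parse(query)
        if result is not None:
            return result, None        # parses: forward to the BL agent
        if query in self.cache:
            # Previously learned: substitute the synonymous parsable query.
            return self.parse(self.cache[query]), None
        return None, query             # ask the user to rephrase

    def reinforce(self, original, rephrased):
        """User feedback: `rephrased` parses and asks the same question."""
        if self.parse(rephrased) is not None:
            self.cache[original] = rephrased

    def reject(self, original):
        """User feedback: the cached rephrasing misrepresents the query."""
        self.cache.pop(original, None)


# Toy parser standing in for ATTAIN: "parses" only queries in a fixed grammar.
grammar = {"What do I include in a sole-source justification?"}
nlp = RephraseCache(lambda q: q if q in grammar else None)
```

The first `handle` call on an unparsable query returns it to the user for rephrasing; after one `reinforce` call, the same query resolves with no user involvement, which is the feedback loop the section describes.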


Natural Language Processing

The NLP agent in MACS is an ATTAIN parser—a package of NL Open Agent Architecture (OAA) agents providing parsing and translation of English sentences into the Interagent Communication Language (ICL). ICL expressions are internal OAA representations of the NL query that agents can act on. These expressions are sent to the SA(s) for query resolution.

ATTAIN allows for both active and passive voice constructions, extensive use of modals (should, could, would), and long verb predicates (long lists of noun phrases and prepositional phrases after the verb). These features enhance the MACS ability to handle the types of queries encountered in the contracting domain, compared to previous versions of MACS. Examples of the types of questions that might be parsed by the upgraded MACS system include: Which contract type do I submit if my proposal deals with university research? How do I determine the scoring of evaluation criteria for competitive solicitations?

However, ATTAIN is unable to handle conditional phrases to the extent needed by MACS. It also has problems handling numbers that function as modifiers (such as “5 hours”) and cannot use certain special characters (such as & and $). These problems are overcome by modifying queries into parsable phrases, rendering multiterm tokens that include numbers into single-term tokens by using underscores between the terms (such as DD_Form_1498) and expanding & and $ to “and” and “dollars,” respectively.
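The query modifications described above amount to a small normalization pass before parsing. The following sketch uses an assumed function name and an illustrative token list:

```python
import re

def normalize_query(query, multiterm_tokens=()):
    """Rewrite a query into a form the parser can handle (sketch, names assumed)."""
    # Join known multiterm tokens containing numbers with underscores.
    for token in multiterm_tokens:                 # e.g., "DD Form 1498"
        query = query.replace(token, token.replace(" ", "_"))
    # Expand special characters the parser cannot use.
    query = query.replace("&", " and ").replace("$", " dollars ")
    return re.sub(r"\s+", " ", query).strip()
```

For example, `normalize_query("Is DD Form 1498 required & when?", ["DD Form 1498"])` yields `"Is DD_Form_1498 required and when?"`.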


System Operation

A series of sample screens illustrate how MACS works [3]. The user is presented with the NLP submission form—a Web page that manages a session. The user, U, submits an unparsable query, say, “What justification type do I need if I am working with a sole source contract?” MACS asks the user to rephrase and categorize (see Figure 1). The scenario might go like this:

  • Ask a question. The user types a query for submission to MACS;
  • Category. The user categorizes unparsable queries according to SA expertise;
  • Pick list of questions. Users can see whether their query was previously resolved by reading through a list of previously answered queries; and
  • This session's sentence(s). Unparsable queries are displayed, and user feedback is collected.

User U now submits a sentence that is answerable: “What do I include in a sole-source justification?” The answer is presented to U, and reinforcement-learning feedback is collected at the bottom of the screen:

  • Query 1 is automatically selected because it is the query that was successfully resolved; and
  • Queries 2 and 3 are previous, unparsable queries; the user identified 2 as being synonymous with 1.

A new user, V, submits the same query (see sentence 2 in Figure 2) that was previously unparsable. MACS has learned to answer this query (see Figure 3). The user is notified that the query was replaced with its synonymous counterpart.

This scenario highlights several MACS advantages:

  • Because it’s modeled after the knowledge domain areas covered by human experts in defense contracting, MACS is capable of handling a range of queries that pertain to contracting. In MACS, users receive immediate answers rather than waiting for human experts to read, research, and reply to queries;
  • By incorporating learning, MACS responds efficiently to users. Over time, the questions asked by users may change as the contracting environment changes. When this occurs, reinforcement learning helps maintain and upgrade the interface and the grammar accommodated by MACS. Bayesian learning helps maintain efficiency in determining which SA might handle queries;
  • Because MACS is built around SAs, its knowledge is easily upgraded and expanded as defense contracting knowledge changes or expands. Additional SAs can be plugged into MACS with minimal integration effort, and new knowledge can be added to existing SA(s);
  • Because MACS is designed so Bayesian and reinforcement learning occur in separate agents, each agent can be upgraded independently;
  • In the current implementation, all test queries, developed with input from the intended user group, parsed on the first try without rephrasing, increasing MACS usability and performance. A total of 26 queries were used to test NLP agent performance; eight (31%) returned only the ground-truth rules. The remaining 18 (69%) returned additional rules beyond the ground-truth rules. The additional rules are less relevant to the query but still provide useful information; for example, when the Contracts agent successfully answered a query, supplementary information from the Synopsis agent may prove useful when the synopsis is submitted;
  • The MACS ability to learn what users are really asking through reinforcement learning serves two purposes: It increases its ability to return meaningful replies to users, and it reduces the need to learn and store new vocabulary and grammar while improving itself with each use;
  • Bayesian learning removes the need to have the Facilitator agent store knowledge about SAs’ expertise and broadcast messages to all agents regardless of relevance. Instead, a single probability value is stored for each SA. Thus, storage requirements are minimized; and
  • In addition to the illustrative example, learning in MACS was evaluated in quantitative terms. The data indicates the BL agent is, in fact, learning which SAs should resolve queries. A total of 135 user sessions served to test MACS, and over time the BL agent more quickly identified the correct SA to which queries should be sent. Initially, an average of three SAs were contacted before identifying an agent with relevant knowledge. Agents with relevant knowledge were identified on the first try 20% of the time in the first 15 test sessions. By the 60th session, the BL agent identified an SA with relevant knowledge on the first try 90% of the time.

Conclusion


The MACS multiagent system is designed for learning, using NL processing to enhance that learning. Both Bayesian learning and the NL interface function with the system architecture to improve system performance. The modular design makes it easy to extend or upgrade as necessary, increasing the useful lifetime of MACS while reducing the burden on human contracting officers.

The MACS features explored here suggest ways in which multiagent systems can become even more useful. Such systems are particularly promising in defense contracting, which relies heavily on people, an expensive and valuable resource, and they can also be extended to other application areas. While MACS is a work in progress, the prototype has served to identify key issues about system performance and provide directions for addressing them.



Figure 1. Unparsable query.

Figure 2. Parsed query and reinforcement learning.

Figure 3. User session with reinforcement learning.


    1. Jouffe, L. Fuzzy inference system learning by reinforcement methods. IEEE Trans. Syst., Man, Cybernet. Part C: Appl. Rev. 28, 3 (1998).

    2. MacIntosh, J., Conry, S., and Meyer, R. Distributed automated reasoning: Issues in coordination, cooperation, and performance. IEEE Trans. Syst., Man, Cybernet. 21, 6 (1991), 1307–1316.

    3. Maulsby, D. and Witten, I. Teaching agents to learn: From user study to implementation. IEEE Comput. 30, 11 (1997), 36–44.

    4. Nardi, B., Miller, J., and Wright, D. Collaborative, programmable intelligent agents. Commun. ACM 41, 3 (Mar. 1998), 96–104.

    5. Odetayo, M. Knowledge acquisition and adaptation: A genetic approach. Expert Syst. Applic. 12, 1 (1995), 3–13.

    6. Prasad, M. and Lesser, V. Learning situation-specific coordination in cooperative multi-agent systems. Auton. Agents Multi-Agent Syst. 2 (1999), 173–207.

    7. Wang, H., Mylopoulos, J., and Liao, S. Intelligent agents and financial risk monitoring systems. Commun. ACM 45, 3 (Mar. 2002), 83–88.

    8. Yoon, V., Rubenstein-Montano, B., Wilson, T., and Lowry, S. Development of a Natural Language Interface for the Multi-Agent Contracting System (MACS). Working paper, University of Maryland, Baltimore County, 2003.
