
Communications of the ACM

Research highlights

Liability Issues in Software Engineering: The Use of Formal Methods to Reduce Legal Uncertainties



This paper reports on the results of a multidisciplinary project involving lawyers and computer scientists with the aim of putting forward a set of methods and tools to (1) define software liability in a precise and unambiguous way and (2) establish such liability in case of incident. The overall approach taken in the project is presented through an electronic signature case study. The case study illustrates a situation where, in order to reduce legal uncertainties, the parties wish to include in the contract specific clauses defining as precisely as possible the share of liabilities between them for the main types of failures of the system.


1. Introduction

Software contracts usually include strong liability limitations or even exemptions of the providers for damages caused by their products. This situation does not favor the development of high-quality software because the software industry does not have sufficient economic incentives to apply stringent development and verification methods. Indeed, experience shows that products tend to be of higher quality and more secure when the actors in position to influence their development are also the actors bearing the liability for their defects.1 The usual argument to justify this lack of liability is that software products are complex and versatile objects whose expected features (and potential defects) cannot be characterized precisely, and which therefore cannot be treated as traditional (tangible) goods. Admittedly, this argument is not without grounds: It is well known that defining in an unambiguous, comprehensive, and understandable way the expected behavior of systems integrating a variety of components is quite a challenge, not to mention the use of such a definition as a basis for a liability agreement. Taking up this challenge is precisely our objective: Our aim is to study liability issues both from the legal and technical points of view and to put forward a formal framework to (1) define liability in a precise and unambiguous way and (2) establish such liability in case of incident. Note that we focus on liabilities for software flaws here and do not consider infringement or any other intellectual property right liabilities, which involve very different issues.

Obviously, specifying all liabilities in a formal framework is neither possible nor desirable. Usually, the parties wish to express as precisely as possible certain aspects which are of prime importance for them and prefer to state other aspects less precisely (either because it is impossible to foresee at contracting time all the events that may occur or because they do not want to be bound by too precise commitments). Taking this requirement into account, we provide a set of tools and methods to be used on an as-needed basis in the contract drafting process (as opposed to a monolithic, "all or nothing" approach). Our model is based on execution traces which are abstractions of the log files of the system. In a nutshell, liability is specified as a function taking as parameters a claim and an execution trace and returning a set of "responsible" actors. This set of actors (ideally a singleton) depends on the claim and the errors occurring in the trace. Both errors and claims are expressed as trace properties. The liability function can be made as precise or detailed as necessary by choosing the claims and errors relevant for a given situation.

The presentation of this paper is based on a case study: an electronic signature application installed on a mobile phone. Section 2 describes the starting point (IT system subject to the agreement, parties involved, informal agreement between the parties and legal context); Section 3 presents our formal framework for the definition of liabilities. Section 4 provides a sketch of related work, and Section 5 identifies avenues for further research.


2. Starting Point

Let us consider an electronic signature system allowing an e-commerce company (ECC) to send a document to be signed by an individual on his mobile phone. The signature of the document is subject to the individual's approval (and authentication) and all communications and signature operations are performed through his mobile phone. In a real situation, the activation of the signature system would be preceded by a request from the individual or by a negotiation with the ECC, but we do not consider this negotiation phase here.

The mobile phone itself incorporates a smart card (for the verification of the Personal Identification Number [PIN] of the user) and a signature application. We assume that the mobile phone provider (MPP), the signature application provider (SAP), and the smart card provider (SCP) want to execute an agreement to put such a mobile phone signature solution on the market. In order to reduce legal uncertainties, the parties wish to include in the agreement specific provisions to define as precisely as possible the share of liabilities between them for the main types of failures of the system. Their intention is to use these provisions to settle liability issues in an amicable way by the application of well-defined rules. At this stage, it may be the case that all the components (software and hardware) are already available and the only remaining task is their integration. It may also be the case that some or all the components still have to be developed. Thus, in general, no assumption can be made that software components can be designed or modified in a specific way to make the implementation of liabilities easier. The only assumptions made at this stage are:

  • On the technical side: the availability of the functional architecture of the system (interfaces of the components and informal definition of their expected behavior)
  • On the business side: an informal agreement between the parties with respect to the share of liabilities

The objective of the infrastructure described in this paper is to allow the parties to translate this informal agreement into a contract which is both valid in the legal sense and as precise as possible, in particular with respect to technical issues, in order to minimize legal uncertainties. In the remainder of this section, we describe the initial technical and legal situation: the IT system itself (Section 2.1), the actors involved (Section 2.2), the informal agreement between the parties signing the agreement (Section 2.3), and the legal context surrounding the agreement (Section 2.4).

* 2.1. IT System

At the start of the contractual phase, the IT system is usually defined in an informal way by its architecture: its components, their interfaces, expected behaviors, and interactions. In our case study, we assume that the electronic signature system is made of the following components:

  • A Server (Serv)
  • A Signature Application (SigApp)
  • A Smart Card (Card)
  • A Mobile Input/Output (IO) component which gathers the keyboard and the display of the mobile phone (including their drivers)
  • An Operating System (OpSys)

All the components except Serv are embedded in the mobile phone. In this paper, we focus on liabilities related to the mobile phone system and do not consider liabilities related to Serv or the communication network. These liabilities could be handled in the same way by adding the ECC and the telecommunication operator as additional parties. The only functionality of OpSys that we consider here is its role of medium for all communications between the mobile phone components (i.e., between SigApp, Card, and IO).

The architecture of the system and its information flows are pictured in Figure 1. The protocol starts with the ECC requesting a signature for document D (message 1). The document is forwarded by Serv and SigApp, and presented to the owner of the mobile phone (OWN) by IO (messages 2, 3, and 4). If OWN refuses to sign, ECC is informed through IO, SigApp, and Serv (messages 5-n, 6-n, 7-n, and 8-n). If OWN agrees, the document and the PIN code entered by OWN are forwarded to Card by SigApp (messages 5-y, 6-y, and 7-y). Next, depending on whether Card authenticates the PIN code or not, the document and the signature produced by Card are sent to ECC via SigApp and Serv (messages 8-y-r, 9-y-r, and 10-y-r), or ECC is informed via SigApp and Serv of the authentication failure (messages 8-y-w, 9-y-w, and 10-y-w). Note that this scenario is a simplified version of a real protocol which is sufficient to illustrate our purpose: for example, in a real system, the PIN code would not be sent in clear for security reasons; it would be provided through a hash encoding.
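The branching in this message flow can be sketched as a small function. The step labels follow the protocol description above; the function itself is purely illustrative and is not part of the actual system.

```python
# Illustrative sketch of the Figure 1 message flow. The step labels
# ("1" ... "10-y-r") follow the protocol description in the text; the
# function itself is hypothetical and only models the branching.
def signature_session(owner_agrees: bool, pin_valid: bool) -> list:
    """Return the ordered message labels exchanged in one signature session."""
    steps = ["1", "2", "3", "4"]  # ECC -> Serv -> SigApp -> IO -> OWN
    if not owner_agrees:
        # OWN refuses to sign: the refusal travels back via IO, SigApp, Serv.
        return steps + ["5-n", "6-n", "7-n", "8-n"]
    # OWN agrees: the document and PIN code are forwarded to Card.
    steps += ["5-y", "6-y", "7-y"]
    if pin_valid:
        # Card authenticates the PIN and the signed document reaches ECC.
        return steps + ["8-y-r", "9-y-r", "10-y-r"]
    # The authentication failure is reported back to ECC.
    return steps + ["8-y-w", "9-y-w", "10-y-w"]
```

Each complete session thus consists of either eight messages (refusal) or ten messages (signature attempt, successful or not).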

* 2.2. Actors

We assume that the contract is to be executed by the three parties involved in the manufacture and distribution of the signature solution:

  • The Mobile Phone Provider (MPP)
  • The Signature Application Provider (SAP)
  • The Smart Card Provider (SCP)

The owner of the mobile phone (OWN) and the ECC are supposed to execute different contracts with MPP which also plays the role of mobile phone operator. We are concerned only with the B2B contract between MPP, SAP, and SCP here. We come back in Section 2.4 to the legal consequences of including the owner of the mobile phone (OWN) among the parties. In the sequel, we shall use the word "party" for MPP, SAP, and SCP, and the word "user" for the end-users of the system (ECC and OWN).

Each component in the system is provided by one of the parties. In our case, we assume that:

  • The SigApp component is provided by SAP.
  • The Card component is provided by SCP.
  • The IO and OpSys components are provided by MPP.

* 2.3. Informal agreement

The parties wish to define as precisely as possible the share of liabilities between themselves in case of a claim from the owner of the mobile phone. In practice, claims will typically be addressed to MPP because MPP is the only party in direct contact (and contractual relationship) with the owner of the mobile phone (both as mobile phone provider and operator). MPP will have to indemnify the owner of the mobile phone if his claim is valid and may in turn be indemnified by one (or several) of the other parties depending on the type of the claim, the available log files, and the liability share defined in the agreement.

In the following, we assume that each document to be signed is originally stamped by ECC and this stamp is (i) unique, (ii) always included in the messages of a given session, and (iii) never modified. This stamp can be seen as a transaction number which makes it easier to distinguish messages pertaining to different signature sessions. Lifting or relaxing this assumption is possible, but at the expense of a more complex model.

As an illustration, we consider two kinds of claims from the owner of the mobile phone, called DiffDoc and NotSigned, concerning the signature of an alleged document D stamped σ:

  1. DiffDoc: The plaintiff OWN claims that he has been presented a document D' stamped σ different from the alleged document D (stamped σ). In the case of a purchase order, for example, D and D' may differ with respect to the quantity or price of the ordered items.
  2. NotSigned: The plaintiff OWN claims that he has never been presented any document stamped σ.

We assume that the parties agree on the following informal share of liabilities for these two types of claims:

  1. If OWN claims that he has been presented a document D' stamped σ different from the alleged document D (stamped σ), then, if this claim is valid:
  • a. SAP shall be liable if SigApp has forwarded to OWN a document (stamped σ) different from the document received from ECC.
  • b. Otherwise MPP shall be liable.
  2. If OWN claims that he has never been presented any document stamped σ, then, if this claim is valid:
  • a. If the smart card has wrongly validated a PIN for document D stamped σ, then SCP shall be liable.
  • b. Otherwise MPP shall be liable.

We do not discuss the value or justifications for this informal agreement here and just take it as an example of a possible share of liabilities. It should be clear that this share of liabilities is the result of a negotiation between the parties, based on a combination of technical as well as business and legal arguments, and it does not have to (and usually cannot) be justified formally. For example, the above rules for the default liability may be found acceptable by MPP because he takes a significant part of the revenues of the business at the expense of bearing the risk in connection with the customer. The point is that the formal framework should not impose any undue constraint on the share of liabilities but should provide means for the parties to express their wishes as precisely as possible.

* 2.4. Legal context

Even though the intention of the parties is to settle liability issues in an amicable way, according to well-defined rules, it is obviously necessary to take into account the legal context applicable to computer systems. Any misconception or overlooking of the legal constraints might lead to contractual clauses that could be invalidated in court, thus increasing rather than reducing legal risks. The two main categories of legal constraints to be considered here concern the two main phases of the process: (1) the formal definition of the share of liabilities among the parties and (2) the analysis of the log files to establish these liabilities after the fact. In the following, we examine these two categories of legal constraints in turn.

Liability Limitations: The first criterion to be taken into account to assess the validity of contractual liability limitations is the qualification of the parties. For example, most jurisdictions provide specific protections to consumers because they are considered as the weak party in a contract with a professional. Let us first consider contracts involving only professionals. Several cases of invalidity of liability limitation clauses are defined by law. The first obvious cases where the liability limitation would be considered null and void are when the party claiming the benefit of the clause has committed acts of intentional fault, willful misrepresentation, or gross negligence. Another case is the situation where the limitation would undermine an essential obligation of a party and would thus introduce an unacceptable imbalance in the contract. This situation is more difficult to assess though, and left to the appraisal of the judge who may either accept the limitation, consider it null, or even impose a different liability cap.a

As far as consumers are concerned, the law offers a number of protections which severely restrict the applicability of liability limitation clauses. The philosophy of these rules is that the consumer is in a weak position in the contractual relationship and legal guarantees should be provided to maintain some form of balance in the contract. For example, professionals must provide to their consumers "non-conformance" and "hidden defects" warranties in French law and "implied warranty" (including "merchantability" and "fitness") in the American Uniform Commercial Code. Any clause which would introduce a significant imbalance at the prejudice of the consumer would be considered unconscionable.

Let us note that we have focused on contractual liability here (liability which is defined in the contract itself): Of course, strict liability (when a defect in a product causes personal or property damages) will always apply with respect to third parties (actors who are not parties to the contract). It is still possible though for professionals to define contractual rules specifying their respective share of indemnities due to a victim (third party) by one of the parties.b

To conclude this subsection, let us mention other criteria that need to be taken into account to refine the legal analysis, in particular: the qualification of the contract itself (product or service agreement); in the case of a product agreement, whether it is qualified as a purchase agreement or a license agreement; the nature of the software (dedicated or off-the-shelf software); the behavior of the actors; etc.

Log Files as Evidence: The first observation concerning the contractual use of log files is that digital evidence is now put on a par with traditional written evidence. In addition, as far as legal facts are concerned (as opposed to legal acts, such as contracts), the general rule is that no constraint is imposed on the means that can be used to provide evidence. As far as legal acts are concerned, the rules depend on the amount of the transaction: for example, no constraint is put on the means to establish evidence for contracts of value less than one thousand and five hundred Euros in France. The logs to be used in the context of our project concern the behavior of software components, which can be qualified as legal facts. Even though they would also be used to establish the existence and content of electronic contracts (as in our case study), we can consider at this stage that their value would be under the threshold imposed by law to require "written evidence" or that the evidence provided by the log files would be accepted as "written evidence" under the aforementioned equivalence principle.

A potential obstacle to the use of log files in court could be the principle according to which "no one can form for himself his own evidence." It seems increasingly admitted, however, that this general principle allows exceptions for evidence produced by computers. As an illustration, the printed list of an airline company showing the late arrival of a traveler at the boarding desk was accepted as evidence by the French Cour de cassation.c Another condition for the validity of log files as evidence is their fairness and legality. For example, a letter, message, or phone conversation recorded without the sender or receiver knowing it cannot be used against them. As far as our project is concerned, attention should be paid to the risk of recording personal data in log files: In certain cases, such recording might be judged unfair and make it impossible to use the log as evidence in court.

Generally speaking, to ensure the strength of the log-based evidence provisions in the agreement, it is recommended to define precisely all the technical steps for the production of the log files, their storage, and the means used to ensure their authenticity and integrity. Last but not least, as in the previous subsection, the cases where consumers are involved deserve specific attention with respect to evidence: Any contractual clause limiting the possibilities of the consumer to defend his case by providing useful evidence is likely to be considered unconscionable in court.
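As a simple illustration of one possible integrity mechanism (this is our own sketch, not a mechanism prescribed by the agreement), each log entry can be chained to the previous one with a cryptographic hash, so that any later modification of a recorded entry is detectable:

```python
import hashlib
import json

GENESIS = "0" * 64  # digest used before the first entry

def append_entry(log, entry):
    """Append an entry whose digest also covers the previous digest (hash chain)."""
    prev = log[-1]["digest"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + payload).encode("utf-8")).hexdigest()
    log.append({"entry": entry, "digest": digest})

def verify_chain(log):
    """Recompute every digest; tampering with any entry breaks the chain."""
    prev = GENESIS
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode("utf-8")).hexdigest() != record["digest"]:
            return False
        prev = record["digest"]
    return True
```

A hash chain protects integrity but not authenticity; in practice the chain head would additionally be signed or anchored with a trusted third party.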

International Law: To conclude this section, let us mention the issue of applicable law. Needless to say, the information technology business is in essence international and, even though we have focused on European regulations in a first stage, more attention will be paid in the future to broaden the scope of the legal study and understand in which respect differences in laws and jurisdictions should be taken into account in the design of our framework. For example, certain types of liability limitations are more likely to be considered as valid by American courts which put greater emphasis on contractual freedom.14


3. Formal Specification of Liabilities

The share of liabilities between the parties was expressed in Section 2.3 in a traditional, informal way. Texts in natural language, even in legal language, often conceal ambiguities and misleading representations. The situation is even worse when such statements refer to mechanisms which are as complex as software. Such ambiguities are sources of legal uncertainties for the parties executing the contract. The use of formal (mathematical) methods has long been studied and put into practice in the computer science community to define the specification of software systems (their expected behavior) and to prove their correctness or to detect errors in their implementations. For various reasons however (both technical and economic), it remains difficult to apply formal methods at a large scale to prove the correctness of a complete system.

In contrast with previous work on formal methods, our goal here is not to apply them to the verification of the system itself (the mobile phone solution in our case study) but to define liabilities in case of malfunction and to build an analysis tool to establish these liabilities from the log files of the system.

It should be clear however, as stated in Section 1, that our goal is not to provide a monolithic framework in which all liabilities would have to be expressed. The method proposed here can be used at the discretion of the parties involved and as much as necessary to express the liabilities concerning the features or potential failures deemed to represent the highest sources of risks for them.

In this section, we present successively the parameters which are used to establish liabilities (Sections 3.1 and 3.2) before introducing the liability function in Section 3.3.

* 3.1. Trace model

Following the informal description in Section 2, the sets of components, parties, and users are defined as follows:

  Components = {Serv, SigApp, Card, IO, OpSys}
  Parties = {MPP, SAP, SCP}
  Users = {ECC, OWN}

S is the set of stamps and C the set of communicating entities (components and users). O and M denote, respectively, the set of communication operations and message contents. The distinction between send and receive events allows us to capture communication errors.d

  C = Components ∪ Users
  O = {Send, Receive}

We assume that signature sessions in traces are complete and the type (document, response, PIN code, and signature) of each element composing a message is implicitly associated with the element itself in order to avoid any ambiguity.

We denote by Traces the set of all traces, a trace T being defined as a function associating a stamp with a list of items. Each item is defined by the communication operation (Send or Receive), the sender, the receiver, and the content of the message:

  Traces = S → List(Items)
  Items = O × C × C × M

A first comment on the above definition is the fact that we use a functional type (from stamps to lists of items) to represent traces. This choice makes the manipulation of traces easier in the sequel because we are always interested in the items corresponding to a given session. Other representations could have been chosen as well, such as lists of items including the stamp information.

Let us note that, in order to make the mathematical definitions and the reasoning simpler, the notions used in this section represent an abstraction of the real system. Therefore, we use the term "trace" here and keep the word "log" to denote the actual information recorded by the system. The link between traces and logs is described by Le Métayer et al.12
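The trace model above can be mirrored directly in code. The concrete types and names below are our own illustrative choices, not the paper's: a trace maps each stamp to the list of items of the corresponding session.

```python
from collections import namedtuple

# An item: communication operation, sender, receiver, message content,
# following Items = O x C x C x M in the trace model.
Item = namedtuple("Item", ["op", "sender", "receiver", "content"])

def items(trace, stamp):
    """T(stamp): the list of items of the session identified by the stamp."""
    return trace.get(stamp, [])

# Hypothetical sample trace with a single session stamped "s1",
# following the message notation used in the text.
sample_trace = {
    "s1": [
        Item("Receive", "Serv", "SigApp", ("D",)),
        Item("Send", "SigApp", "IO", ("D",)),
        Item("Send", "SigApp", "Serv", ("Yes", "D", "S")),
    ]
}
```

Representing a trace as a dictionary keyed by stamps reflects the functional type S → List(Items): retrieving one session is a single lookup.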

* 3.2. Trace properties

We present successively the two types of trace properties used in this paper: error properties and claim properties.

Error Properties: The most important parameter to determine the allocation of liabilities is the nature of the errors which can be detected in the log files of the system. Ideally, the framework should be general enough to reflect the wishes of the parties and to make it possible to explore the combinations of errors in a systematic way. One possible way to realize this exploration is to start with a specification of the key properties to be satisfied by the system and derive the cases which can lead to the negation of these properties.

Our goal being to analyze log files, we characterize the expected properties of the system directly in terms of traces. For example, the fact that SigApp should send to IO the document D received from Serv (and only this document) can be expressed as follows:

  ∀D, (Receive, Serv, SigApp, [D]) ∈ T(σ) ⇒ (Send, SigApp, IO, [D]) ∈ T(σ)
  ∧ ∀D', (Send, SigApp, IO, [D']) ∈ T(σ) ⇒ (Receive, Serv, SigApp, [D']) ∈ T(σ)

In the scenario considered here, the systematic study of the cases of violation of this property leads to the following errors:

  ∃D, ∃D', (Receive, Serv, SigApp, [D]) ∈ T(σ) ∧ (Send, SigApp, IO, [D']) ∈ T(σ) ∧ D ≠ D'   (SigApp-Diff(T, σ))

  ∨ ∃D, (Receive, Serv, SigApp, [D]) ∈ T(σ) ∧ ¬(∃D', (Send, SigApp, IO, [D']) ∈ T(σ))

  ∨ ∃D', (Send, SigApp, IO, [D']) ∈ T(σ) ∧ ¬(∃D, (Receive, Serv, SigApp, [D]) ∈ T(σ))

Space considerations prevent us from presenting this systematic derivation here. It relies on a decomposition of the negation of the properties into disjunctive normal form and selective application of additional decomposition transformations for "nonexistence" properties. The terms of the above disjunction correspond to three typical types of errors:

1. The first term defines a case where a message is sent with content different from expected.
2. The second term is a case of an expected message which is not sent.
3. The third term is a case where an unexpected message is sent.

Similarly, the negation of the property that Card returns a signature only when it has received OWN's PIN code POWN leads to several errors of the three aforementioned types, of which we assume only two are deemed relevant by the parties. The first one, Card-WrongVal, describes a case where an approval and a signature are sent by Card even though it has not received the right PIN code:

  Card-WrongVal(T, σ) ≡ ∃D, ∃S, (Send, Card, SigApp, [Yes; D; S]) ∈ T(σ) ∧ ¬((Receive, SigApp, Card, [D; POWN]) ∈ T(σ))

The second one, Card-WrongInval, defines a case where Card refuses to sign a document even though it has received the correct PIN code POWN:

  Card-WrongInval(T, σ) ≡ ∃D, (Receive, SigApp, Card, [D; POWN]) ∈ T(σ) ∧ ¬(∃S, (Send, Card, SigApp, [Yes; D; S]) ∈ T(σ))

Needless to say, errors can also be defined directly based on the parties' understanding of the potential sources of failure of the system and their desire to handle specific cases. The derivation method suggested here can be used when the parties wish to take a more systematic approach to minimize the risk of missing relevant errors.

Last but not least, the language used to express properties for this case study is relatively simple as it does not account for the ordering of items in traces. In general, richer logics may be needed, for example, to express temporal properties. The choice of the language of properties does not have any impact on the overall process, but it may make some of the technical steps, such as the analysis of the log architecture, more or less difficult.13
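Error properties of this kind can be checked mechanically over a session's item list. The encoding below (plain tuples for items, strings for PIN codes) is our own illustrative choice; only the property names follow the text.

```python
# Illustrative checkers for error properties over one session, represented
# as a list of (op, sender, receiver, content) tuples. The encoding is
# ours; the property names (SigApp-Diff, Card-WrongVal) follow the text.

def sigapp_diff(session):
    """SigApp sent IO a document different from the one received from Serv."""
    received = [c for (op, s, r, c) in session
                if (op, s, r) == ("Receive", "Serv", "SigApp")]
    sent = [c for (op, s, r, c) in session
            if (op, s, r) == ("Send", "SigApp", "IO")]
    return any(d != d2 for d in received for d2 in sent)

def card_wrong_val(session, own_pin):
    """Card produced a signature without having received OWN's PIN code."""
    signed = any((op, s) == ("Send", "Card") and c[0] == "Yes"
                 for (op, s, r, c) in session)
    got_pin = any((op, r) == ("Receive", "Card") and own_pin in c
                  for (op, s, r, c) in session)
    return signed and not got_pin
```

Because each property only quantifies over items of one session, checking it is a linear scan of that session's items.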

Claim Properties: Claim properties represent the "grounds" for the claims of the users: They correspond to failures of the system as experienced by the users. In practice, such failures should cause damages to the user to give rise to liabilities, but damages are left out of the formal model. Claims can thus be expressed, like errors, as properties on traces. We consider two claim properties here, Claim-DiffDoc and Claim-NotSigned, which define the grounds for the claims introduced in Section 2.3:

  Claim-DiffDoc(T, σ) ≡ ∃D, ∃D', ∃S, (Receive, SigApp, IO, [D']) ∈ T(σ) ∧ (Send, SigApp, Serv, [Yes; D; S]) ∈ T(σ) ∧ D ≠ D'

  Claim-NotSigned(T, σ) ≡ ∃D, ∃S, (Send, SigApp, Serv, [Yes; D; S]) ∈ T(σ) ∧ ¬(∃D', (Receive, SigApp, IO, [D']) ∈ T(σ))

The first definition defines a claim corresponding to a case where OWN has been presented a document D' with stamp σ (as indicated by (Receive, SigApp, IO, [D'])) different from the document D sent by the signature application to the server (message (Send, SigApp, Serv, [Yes; D; S])). The second definition defines a claim corresponding to a case where the signature application has sent to the server a message (Send, SigApp, Serv, [Yes; D; S]) indicating that OWN has signed a document stamped σ when OWN has never been presented any document stamped σ.

* 3.3. Liability function

The formal specification of liabilities can be defined as a function mapping a claim, a trace, and a stamp onto a set of parties:e

  Liability: Claims × Traces × S → ℘(Parties)

where Claims = {DiffDoc, NotSigned}.

As an illustration, the following function captures the share of liabilities introduced in Section 2.3:

  Liability(DiffDoc, T, σ) = ∅ if ¬Claim-DiffDoc(T, σ)
                             {SAP} if Claim-DiffDoc(T, σ) ∧ SigApp-Diff(T, σ)
                             {MPP} if Claim-DiffDoc(T, σ) ∧ ¬SigApp-Diff(T, σ)

  Liability(NotSigned, T, σ) = ∅ if ¬Claim-NotSigned(T, σ)
                               {SCP} if Claim-NotSigned(T, σ) ∧ Card-WrongVal(T, σ)
                               {MPP} if Claim-NotSigned(T, σ) ∧ ¬Card-WrongVal(T, σ)

The two cases in Liability correspond to the two types of claims considered in Section 2.3. For each type of claim, the goal of the first test is to check the validity of the claim raised by OWN. If OWN raises a claim which is not confirmed by the trace, then the result of Liability is the empty set because no party has to be made liable for an unjustified claim. Otherwise, if OWN claims to have been presented a document D' different from the alleged document D, then SAP is liable if SigApp has forwarded to OWN a document (stamped σ) different from the document received from ECC (SigApp-Diff(T, σ)); otherwise, MPP is liable. Similarly, if OWN's claim is that he has never been presented any document stamped σ, then SCP is liable if the smart card has wrongly validated a PIN in session σ (Card-WrongVal(T, σ)); otherwise MPP is liable.
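This decision logic can be transcribed almost directly into code. As an illustrative simplification (ours, not the paper's), the claim and error properties are assumed to have been evaluated on the trace already and are passed in as booleans:

```python
# Sketch of the Liability function of Section 3.3. The claim and error
# properties are assumed to have been evaluated on the trace beforehand
# and are passed in as booleans (an illustrative simplification).
def liability(claim, claim_valid, sigapp_diff=False, card_wrong_val=False):
    """Return the set of liable parties for one claim on one session."""
    if not claim_valid:
        return set()  # unjustified claim: no party is liable
    if claim == "DiffDoc":
        return {"SAP"} if sigapp_diff else {"MPP"}
    if claim == "NotSigned":
        return {"SCP"} if card_wrong_val else {"MPP"}
    raise ValueError(f"unknown claim: {claim}")
```

Note how MPP appears as the default liable party in both branches, mirroring the informal agreement of Section 2.3.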


4. Related Work

The significance of liability, warranty, and accountability and their potential impact on software quality have already been emphasized by computer scientists as well as lawyers.1, 4, 21, 22 However, we are not aware of previous work on the application of formal methods to the definition of software liability. Earlier works on the specification of contracts mostly deal with obligations in a general sense6, 9, 19 or with specific types of contracts such as commercial contracts or privacy rules11, 18 but do not address liabilities related to software errors.

Service Level Agreements also define contractual provisions but generally focus on Quality of Service rather than functional requirements and do not put emphasis on formal specifications. A notable exception is the SLAng language24, 25 which is endowed with a formal semantics and can be used to specify a variety of services such as Application Service Provision, Internet Service Provision, or Storage Service Provision. In addition, the monitorability and monitoring of SLAng services have been considered both from the formal and practical points of view.26

Several other connected areas share part of our objectives such as software dependability,3 model-based diagnosis,5, 16 intrusion detection,10 forensics,2, 20 and digital evidence.7, 27, 28 Needless to say, each of these areas is a useful source of inspiration for our project, but we believe that none of them, because of their different objectives, provides the answer to the key problem addressed in this paper, namely, the formal specification and instrumentation of liability.


    5. Conclusion

    As stated in Section 1, the goal of this paper is to provide an overview of our approach through a case study. Because the presentation is targeted toward the case study rather than stated in general terms, a number of technical issues and options have not been described here. In this section, we suggest some generalizations and avenues for further research.

    The notions of traces and properties have been presented in a somewhat simplistic way in this paper. In general, the parties may wish to define liabilities in a more flexible way and establish a link between the erroneous execution of certain components and the failures of the system (and resulting damages). To address this need, we have defined a concept of "logical causality" in Goesseler et al.8 Causality plays a key role in the legal characterization of liability. It has also been studied for a long time in computer science, but with quite different perspectives and goals. In the distributed systems community, causality is seen essentially as a temporal property. In Goesseler et al.,8 we have defined several variants of logical causality allowing us to express the fact that an event e2 (e.g., a failure) would not have occurred if another event e1 had not occurred ("necessary causality") or the fact that e2 could not have been avoided as soon as e1 had occurred ("sufficient causality"). We have shown that these causality properties are decidable and proposed trace analysis procedures to establish them. These notions of causality can be integrated in the framework presented here to increase the expressive power of the language used for the definition of liability functions.

    Another aspect which requires further consideration is the nature of the evidence itself and the values stored in the logs. First, some values may have to be kept out of the logs, e.g., for privacy or security reasons. Conversely, not all the relevant information may be included in the log files of the system. For example, in our case study, the fact that the owner of the mobile phone has declared the theft of his device or has signed an acknowledgment receipt for a product sent by the ECC can be useful information to analyze the situation (depending on the liability rules decided by the parties). Traces can thus be more than abstract versions of the log files and include other types of actions from all the actors involved.
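    A minimal sketch of this idea: a trace is built by abstracting raw log entries (masking values that must not be exposed) and merging externally attested events, such as a theft declaration, that never appear in the system's own logs. Field names and the masking policy are purely illustrative assumptions, not part of our framework's definitions.

```python
# Hypothetical construction of an analysis trace: abstract the raw log
# entries (masking sensitive values) and merge external events attested
# outside the system's own logs, ordered by timestamp.

SENSITIVE_FIELDS = {"pin", "document_content"}  # illustrative policy

def abstract_entry(entry):
    """Keep only the information relevant to the liability analysis."""
    return {k: ("<masked>" if k in SENSITIVE_FIELDS else v)
            for k, v in entry.items()}

def build_trace(log_entries, external_events):
    """Merge abstracted log entries with externally attested events."""
    events = [abstract_entry(e) for e in log_entries] + list(external_events)
    return sorted(events, key=lambda e: e["ts"])

log = [{"ts": 2, "action": "sign", "pin": "1234"}]
external = [{"ts": 1, "action": "theft_declared", "by": "owner"}]
trace = build_trace(log, external)
print([e["action"] for e in trace])  # ['theft_declared', 'sign']
print(trace[1]["pin"])               # '<masked>'
```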

    The simple case study used to illustrate this paper did not allow us to discuss the issues related to distributed systems. A key design choice in this context is the distribution of the log files themselves. Indeed, recording log entries on a device controlled by an actor who may be involved in a claim for which this log would be used as evidence may not be acceptable to the other parties. In Le Métayer et al.,13 we introduce a framework for the specification of log architectures and propose criteria to characterize "acceptable log architectures." These criteria depend on the functional architecture of the system itself and on the potential claims between the parties. They can be used to check whether a log architecture is appropriate for a given set of potential claims and to suggest improvements turning a non-acceptable log architecture into an acceptable one. On the formal side, we have shown that, for a given threat model, the logs produced by acceptable log architectures can be trusted as evidence for the determination of liabilities: technically speaking, any conclusive evaluation of a claim on these logs returns the same answer as the evaluation of the claim on the sequence of real events.
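    One plausible acceptability criterion of this kind can be sketched as follows: for every potential claim, each event type needed as evidence must be recorded on at least one device that is not controlled by a party to that claim. This is our own simplification for illustration, not the formal definition of Le Métayer et al.; the device and event names are invented.

```python
# Toy acceptability check for a log architecture (our simplification):
# evidence for a claim must be logged on a device whose controller is
# not a party to that claim.

def acceptable(log_arch, claims):
    """log_arch: device -> (controller, set of logged event types).
    claims: list of (set of parties, set of needed event types)."""
    for parties, needed in claims:
        for ev in needed:
            if not any(ctrl not in parties and ev in logged
                       for ctrl, logged in log_arch.values()):
                return False  # no independent record of this evidence
    return True

arch = {
    "phone":  ("owner",    {"sign_request"}),
    "server": ("provider", {"sign_request", "signature"}),
}
# Claim between owner and merchant: the provider's server suffices.
print(acceptable(arch, [({"owner", "merchant"}, {"signature"})]))   # True
# Claim involving the provider: no independent device logs "signature".
print(acceptable(arch, [({"owner", "provider"}, {"signature"})]))   # False
```

    The second check suggests the improvement of adding a log of signatures on a device controlled by neither the owner nor the provider.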

    As far as the log analysis itself is concerned, we have proposed a formal specification of the analyzer using the B method in Mazza et al.15 and we have shown the correctness of an incremental analysis process. This result makes it possible to build on the output of a first analysis to improve it by considering additional logs or further properties.
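    The shape of such an incremental process can be sketched with a three-valued claim evaluation: a verdict is either conclusive (True or False) or inconclusive (None), and analyzing additional logs can only settle an inconclusive verdict, never flip a conclusive one. This toy sketch is ours; the actual analyzer of Mazza et al. is specified in B, and the claim shape below is an invented example.

```python
# Sketch of incremental log analysis with a three-valued verdict:
# None = inconclusive; further logs can refine it but a conclusive
# verdict is reused as-is.

def evaluate(claim, events):
    """claim: (required event, forbidden event). Conclusive as soon as
    either event has been observed."""
    required, forbidden = claim
    if forbidden in events:
        return False
    if required in events:
        return True
    return None  # inconclusive: keep analyzing further logs

def refine(previous, claim, new_events, seen):
    """Incremental step: build on a previous analysis."""
    if previous is not None:
        return previous, seen  # conclusive verdicts are never revisited
    seen = seen | set(new_events)
    return evaluate(claim, seen), seen

claim = ("ack_received", "theft_declared")
verdict, seen = refine(None, claim, ["boot", "sign"], set())
print(verdict)                      # None: first log is inconclusive
verdict, seen = refine(verdict, claim, ["ack_received"], seen)
print(verdict)                      # True: second log settles the claim
```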

    To conclude, we should stress that the methods and tools provided by our framework can be applied in an incremental way, depending on the wishes of the parties, the economic stakes, and the timing constraints for drafting the contract.

    1. The first level of application is a systematic (but informal) definition of liabilities in the style of Section 2.3.
    2. The second level is the formal definition of liabilities as presented in Section 3.3. This formal definition itself can be more or less detailed and encompass only a part of the liability rules defined informally. In addition, it does not require a complete specification of the software but only the properties relevant for the targeted liability rules.
    3. The third level is the implementation of a log infrastructure or the enhancement of existing logging facilities to ensure that all the information required to establish liabilities will be available if a claim is raised. An example of implementation of our framework is described in Le Métayer et al.,12 which defines a log architecture for OSGi.
    4. The fourth level is the implementation of a log analyzer to assist human experts in the otherwise tedious and error-prone log inspection.
    5. A fifth level would be the verification of the correctness of the log analyzer with respect to the formal definition of liabilities (considering the correspondence between log files and traces). This level would bring an additional guarantee about the validity of the results produced by the system.

    Each of these levels contributes to further reducing the uncertainties with respect to liabilities, and the parties can choose the level commensurate with the risks involved with the potential failures of the system.

    Last but not least, we are currently working on two other key issues related to log files which have not been discussed here: their optimization in terms of storage (compaction, retention delay, etc.) using an index-based factorization method and techniques to ensure their authenticity and integrity17, 23 including trusted serialization of log items.
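    The integrity requirement can be illustrated by hash chaining, in the spirit of secure audit logs:23 each entry commits to its predecessor, so any later tampering or deletion breaks the chain. This sketch omits the keyed MACs and key evolution of the full scheme and is only an assumption-laden illustration, not our implementation.

```python
import hashlib
import json

# Minimal hash-chained log: each item stores a digest over the previous
# digest and its own serialized entry, so tampering is detectable.

GENESIS = "0" * 64  # digest used before the first entry

def append(chain, entry):
    prev = chain[-1]["h"] if chain else GENESIS
    data = json.dumps(entry, sort_keys=True)  # canonical serialization
    h = hashlib.sha256((prev + data).encode()).hexdigest()
    chain.append({"entry": entry, "h": h})

def verify(chain):
    prev = GENESIS
    for item in chain:
        data = json.dumps(item["entry"], sort_keys=True)
        if hashlib.sha256((prev + data).encode()).hexdigest() != item["h"]:
            return False  # chain broken: entry modified or reordered
        prev = item["h"]
    return True

log = []
append(log, {"ts": 1, "action": "sign_request"})
append(log, {"ts": 2, "action": "signature"})
print(verify(log))          # True
log[0]["entry"]["ts"] = 9   # tamper with the first entry
print(verify(log))          # False
```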

    Acknowledgments

    We would like to thank all the members of the LISE project, who have, through many discussions and exchanges, contributed to the work described here, in particular Christophe Alleaume, Valérie-Laure Benabou, Denis Beras, Christophe Bidan, Gregor Goessler, Julien Le Clainche, Ludovic Mé, and Sylvain Steer.

    References

    1. Anderson, R., Moore, T. Information security economics - and beyond. Information Security Summit (IS2) (2009).

    2. Arasteh, A.R., Debbabi, M., Sakha, A., Saleh, M. Analyzing multiple logs for forensic evidence. Digit. Invest. 4 (2007), 82–91.

    3. Avizienis, A., Laprie, J.-C., Randell, B. Fundamental concepts of computer system dependability. In IARP/IEEE-RAS Workshop on Robot Dependability: Technological Challenges of Dependable Robots in Human Environments (2001).

    4. Berry, D.M. Appliances and software: The importance of the buyer's warranty and the developer's liability in promoting the use of systematic quality assurance and formal methods. In Proceedings of the Workshop on Modeling Software System Structures in a Fastly Moving Scenario (Santa Margherita Ligure, Italy, 2000); http://www.montereyworkshop.org/PROCEEDINGS/BERRY/

    5. Brandan-Briones, L., Lazovik, A., Dague, P. Optimal observability for diagnosability. In International Workshop on Principles of Diagnosis (2008).

    6. Farrell, A.D.H., Sergot, M.J., Sallé, M., Bartolini, C. Using the event calculus for tracking the normative state of contracts. Int. J. Coop. Inform. Sys. (IJCIS) 14, 2–3 (2005), 99–129.

    7. Gladyshev, P., Enbacka, A. Rigorous development of automated inconsistency checks for digital evidence using the B method. Int. J. Dig. Evidence (IJDE) 6, 2 (2007), 1–21.

    8. Goessler, G., Raclet, J.-B., Le Métayer, D. Causality analysis in contract violation. In International Conference on Runtime Verification (RV 2010), LNCS 6418 (2010), 270–284.

    9. Governatori, G., Milosevic, Z., Sadiq, S.W. Compliance checking between business processes and business contracts. In EDOC, IEEE Computer Society (2006), 221–232.

    10. Jones, A.K., Sielken, R.S. Computer System Intrusion Detection: A Survey. TR, University of Virginia Computer Science Department, 1999.

    11. Le Métayer, D. A formal privacy management framework. In Formal Aspects of Security and Trust (FAST), Springer Verlag, LNCS 5491 (2009), 162–176.

    12. Le Métayer, D., Maarek, M., Mazza, E., Potet, M.-L., Frénot, S., Viet Triem Tong, V., Craipeau, N., Hardouin, R. Liability in software engineering: Overview of the LISE approach and illustration on a case study. In International Conference on Software Engineering, Volume 1, ACM/IEEE (2010), 135–144.

    13. Le Métayer, D., Mazza, E., Potet, M.-L. Designing log architectures for legal evidence. In International Conference on Software Engineering and Formal Methods (SEFM 2010), IEEE (2010), 156–165.

    14. Lipovetsky, S. Les clauses limitatives de responsabilité et de garantie dans les contrats informatiques. Approche comparative France/États-Unis. Quelles limitations. Expertises des systèmes d'information, no. 237 (May 2000), 143–148.

    15. Mazza, E., Potet, M.-L., Le Métayer, D. A formal framework for specifying and analyzing logs as electronic evidence. In Brazilian Symposium on Formal Methods (SBMF 2010) (2010).

    16. Papadopoulos, Y. Model-based system monitoring and diagnosis of failures using statecharts and fault trees. Reliab. Eng. Syst. Safety 81 (2003), 325–341.

    17. Parrend, P., Frénot, S. Security benchmarks of OSGi platforms: Toward hardened OSGi. Softw. Prac. Exp. (SPE) 39, 5 (2009), 471–499.

    18. Peyton Jones, S.L., Eber, J.-M. How to write a financial contract. In The Fun of Programming, Cornerstones of Computing, chapter 6, 2003.

    19. Prisacariu, C., Schneider, G. A formal language for electronic contracts. In FMOODS, Springer, LNCS 4468 (2007), 174–189.

    20. Rekhis, S., Boudriga, N. A temporal logic-based model for forensic investigation in networked system security. Computer Network Security, LNCS 3685 (2005), 325–338.

    21. Ryan, D.J. Two views on security software liability: Let the legal system decide. IEEE Security Privacy (January–February 2003).

    22. Schneider, F.B. Accountability for perfection. IEEE Security Privacy (March–April 2009).

    23. Schneier, B., Kelsey, J. Secure audit logs to support computer forensics. ACM Trans. Inform. Syst. Security (TISSEC) 2, 2 (1999), 159–176.

    24. Skene, J., Lamanna, D.D., Emmerich, W. Precise service level agreements. In ACM/IEEE International Conference on Software Engineering (ICSE), IEEE (2004), 179–188.

    25. Skene, J., Raimondi, F., Emmerich, W. Service-level agreements for electronic services. IEEE Trans. Software Eng. (TSE) 36, 2 (2010), 288–304.

    26. Skene, J., Skene, A., Crampton, J., Emmerich, W. The monitorability of service-level agreements for application-service provision. In International Workshop on Software and Performance (WOSP), ACM (2007), 3–14.

    27. Solon, M., Harper, P. Preparing evidence for court. Digit. Invest. 1 (2004), 279–283.

    28. Stephenson, P. Modeling of post-incident root cause analysis. Int. J. Dig. Evidence (IJDE) 2, 2 (2003).

    Authors

    Daniel Le Métayer and Manuel Maarek, INRIA, Grenoble Rhône-Alpes, France.

    Eduardo Mazza and Marie-Laure Potet, VERIMAG, University of Grenoble, France.

    Stéphane Frénot, INRIA, Grenoble Rhône-Alpes, and INSA, Lyon, France.

    Valérie Viet Triem Tong, SSIR, Supélec Rennes, France.

    Nicolas Craipeau, PrINT, University of Caen, France.

    Ronan Hardouin, DANTE, University of Versailles, Saint-Quentin, France.

    Footnotes

    a. The "Faurecia case" illustrates the different interpretations of the notion of "essential contractual obligation" in France. In June 2010, the commercial chamber of the final court of appeal ("Cour de cassation") rejected a referral of the case to the court of appeal, which had itself declared that the limitation of liability was not in contradiction with the essential obligation of the software provider (Oracle) because the customer (Faurecia) could get a reasonable compensation. The philosophy of the decision is that the overall balance of the contract and the behavior of the parties should be considered to decide upon the validity of liability limitations.

    b. In European laws, the victim of a defect caused by a product can sue any of the actors involved in the manufacturing or distribution of the product.

    c. Cass. civ. 1re, July 13th. 2004: Bull. civ. 2004, I, no. 207.

    d. This feature is not illustrated in this paper.

    e. P(S) denotes the set of all subsets (powerset) of S.

    This paper presents the results of the LISE (Liability Issues in Software Engineering) project funded by ANR (Agence Nationale de la Recherche) under grant ANR-07-SESU-007. A previous version of this paper was published in the Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering, ICSE 2010.

    DOI: http://doi.acm.org/10.1145/1924421.1924444

    Figures

    Figure 1. Signature protocol.


    ©2011 ACM  0001-0782/11/0400  $10.00

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

    The Digital Library is published by the Association for Computing Machinery. Copyright © 2011 ACM, Inc.