I concur wholeheartedly with the composability benefits Brian Beckman outlined in his article "Why LINQ Matters: Cloud Composability Guaranteed" (Apr. 2012), based on my own experience using composability principles to design and implement the message-dissemination mechanism for a mobile ad hoc router in a proprietary network. In that router, the message-dissemination functionality emerges from the aggregation of approximately 1,500 nodes in a composable tree resembling a large version of the lambda-tree diagrams in the article. However, instead of being LINQ-based, each node represents either a control element (such as if/else, for-loop, or Boolean operation) or a direct accessor of message attributes. Each incoming message traverses the composable tree, with control nodes directing it through pertinent branches based on message attributes (such as message type, timestamp, and sender's location) until it reaches processing nodes that complete the dissemination.
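To make the structure concrete, here is a minimal sketch in Python of what such a composable node tree might look like; the actual implementation is proprietary, and all class, attribute, and message-field names here are invented for illustration.

# A minimal, hypothetical sketch of a composable routing tree; the real
# router's node types and message attributes are proprietary.
from dataclasses import dataclass
from typing import Callable


class Node:
    """Base type: every node handles a message, possibly delegating onward."""
    def handle(self, msg: dict) -> None:
        raise NotImplementedError


@dataclass
class IfElseNode(Node):
    """Control node: directs the message down one of two branches."""
    predicate: Callable[[dict], bool]
    if_branch: Node
    else_branch: Node

    def handle(self, msg: dict) -> None:
        branch = self.if_branch if self.predicate(msg) else self.else_branch
        branch.handle(msg)


@dataclass
class ProcessingNode(Node):
    """Leaf node: completes dissemination for messages that reach it."""
    action: Callable[[dict], None]

    def handle(self, msg: dict) -> None:
        self.action(msg)


# Composing a tiny two-node tree; the real tree aggregates ~1,500 such nodes.
tree = IfElseNode(
    predicate=lambda msg: msg["type"] == "position",
    if_branch=ProcessingNode(action=lambda msg: print("forward:", msg)),
    else_branch=ProcessingNode(action=lambda msg: print("drop:", msg)),
)
tree.handle({"type": "position", "sender": "unit-7"})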
Since assembling and maintaining a 1,500-node tree directly within the code base would be daunting, a parser assembles the tree from a 1,300-line routing-rule specification written in a domain-specific language (DSL); a hypothetical flavor of such a specification is sketched after the list below. Defining the routing rules through this DSL-assembled composable tree also provides these additional benefits:
Nodes verified independently. The if/else, message-timestamp, and other node types can be verified in isolation;
Routing rules modified for unit testing. As the routing rules mature, exercising them requires a full lab or field configuration, making it difficult to test new features; quickly simplifying a local copy of the DSL specification yields routing rules that bypass irrelevant lab/field constraints while focusing on the feature under test on the developer's desktop;
Scalable and robust. New routing rules can be added to the DSL specification; new routing concepts can be added through the definition of new node types; and new techniques can be added to the overall design; and
Each message traversal recorded by the composable tree. Each node logs a brief one-line statement describing what it was doing and why the message followed a particular branch; the aggregation of these statements provides an itinerary describing each message's journey through the tree, for confirmation or debugging.
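Because the actual DSL and its grammar are proprietary, the following is only a hypothetical sketch of how a few routing-rule lines might be parsed onto the node types from the earlier sketch; the keywords and the two-line spec are invented.

# Hypothetical toy DSL and parser, reusing IfElseNode and ProcessingNode
# from the earlier sketch; the real 1,300-line specification and its
# grammar are proprietary, so these keywords are invented.
SPEC = """\
if type == position
  forward
else
  drop
"""

def parse(spec: str) -> IfElseNode:
    """Assemble a two-branch tree from the toy specification above."""
    lines = [ln.strip() for ln in spec.splitlines() if ln.strip()]
    _, attr, _, value = lines[0].split()      # "if type == position"
    actions = {
        "forward": ProcessingNode(action=lambda m: print("forward:", m)),
        "drop": ProcessingNode(action=lambda m: print("drop:", m)),
    }
    return IfElseNode(
        predicate=lambda msg: str(msg.get(attr)) == value,
        if_branch=actions[lines[1]],
        else_branch=actions[lines[3]],
    )

parse(SPEC).handle({"type": "status", "sender": "unit-7"})  # prints "drop: ..."

A production parser would of course handle nesting, loops, and Boolean nodes; the point is only that the tree is assembled from text rather than hand-written code.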
My experience with composable trees defined through a DSL has been so positive that I would definitely consider using the technique again to solve problems that are limited in scope but unlimited in variation.
Jim Humelsine, Neptune, NJ
Model Dependence in Sample-Size Calculation
We wish to clarify and expand on several points raised by Martin Schmettow in his article "Sample Size in Usability Studies" (Apr. 2012) regarding sample-size calculation in usability engineering, emphasizing the challenges of calculating sample size for binomial-type studies and identifying promising methodologies for future investigation.
Schmettow interpreted "overdispersion" as an indication of the variability of the parameter p; that is, when n Bernoulli trials are correlated (dependent), the variance can be written as np(1−p)(1+C), where C is the correlation parameter; when C>0 the result is overdispersion. When the Bernoulli trials are negatively correlated, or C<0, the result is "underdispersion." If the trials are independent, then C=0, corresponding to the binomial model. Bernoulli trials may thus result in overdispersion or underdispersion; in practice, overdispersion is more common due to the heterogeneity of populations/samples.
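For completeness (our addition, not the letter's), here is the standard derivation under the common assumption of exchangeable trials with pairwise correlation ρ, which identifies C as (n−1)ρ:

% Standard derivation, assuming exchangeable Bernoulli trials
% X_1, ..., X_n with E[X_i] = p and pairwise correlation \rho:
\begin{align*}
\operatorname{Var}\Big(\sum_{i=1}^{n} X_i\Big)
  &= \sum_{i=1}^{n} \operatorname{Var}(X_i)
     + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j) \\
  &= n\,p(1-p) + n(n-1)\,\rho\,p(1-p)
   = n\,p(1-p)\bigl(1 + (n-1)\rho\bigr),
\end{align*}
% so C = (n-1)\rho: \rho > 0 gives overdispersion, \rho < 0 (bounded
% below by -1/(n-1)) gives underdispersion, and \rho = 0 recovers the
% binomial variance np(1-p).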
A widely used approach for modeling an overdispersed binomial process is to treat p as a random variable to account for all uncertainty. A common model for p is the beta distribution, which leads to a well-known prototype model for overdispersion, the "beta-binomial distribution." Note this model assumes a particular parametric distribution for the random variable p. However, sample-size calculations based on this paradigm also involve computational challenges; M'Lan et al.1 concluded that choosing among the many sample-size-determination criteria in the literature is ultimately a matter of personal taste. Note, too, that Schmettow's "zero-truncated logit-normal binomial model" follows this scheme. To the best of our knowledge, the Bernstein-Dirichlet process is a promising family for such a modeling framework; a nice feature of the related distribution of p is that any density on (0, 1] can be approximated by a Bernstein polynomial.
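As a quick numerical illustration of this paradigm (ours, not from either article), drawing p from a beta distribution and then sampling binomial counts produces variance well above the fixed-p benchmark np(1−p); the parameter values below are arbitrary:

# Illustrative simulation (not from the letter): a beta-binomial, i.e., a
# binomial whose success probability p is itself beta-distributed, shows
# variance above the fixed-p binomial benchmark n*p*(1-p).
import numpy as np

rng = np.random.default_rng(0)
n, a, b, reps = 20, 2.0, 5.0, 200_000

p_mean = a / (a + b)                       # E[p] = 2/7
p_draws = rng.beta(a, b, size=reps)        # heterogeneous p across "users"
counts = rng.binomial(n, p_draws)          # beta-binomial draws

binom_var = n * p_mean * (1 - p_mean)      # variance if p were fixed at E[p]
print(f"empirical variance: {counts.var():.2f}")   # ~13.8, overdispersed
print(f"binomial benchmark: {binom_var:.2f}")      # ~4.08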
A more common approach is to fix p, using the confidence-interval formulas derived from normal approximations to the binomial distribution; these require an estimate of p as input to the sample-size formula. However, the normal-based interval approximation is well known to be erratic for small sample sizes, and even for large samples when p is near the boundaries 0 or 1.
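The textbook formula of this kind (a standard result, included here for concreteness rather than taken from either article) inverts the Wald interval to obtain n from a desired margin of error E at confidence level 1−α:

% From the normal-approximation (Wald) interval
%   \hat{p} \pm z_{1-\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n},
% solving for the margin of error E gives
\[
n \;=\; \frac{z_{1-\alpha/2}^{\,2}\,\hat{p}\,(1-\hat{p})}{E^{2}};
\]
% e.g., \hat{p} = 0.5, E = 0.05, z_{0.975} = 1.96 gives n \approx 385.
% The instability near \hat{p} \in \{0, 1\} is exactly what the letter
% warns about.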
Current sample-size-calculation procedures are thus highly model-dependent, so their results will differ. A universal procedure that works for any given binomial-type process does not yet exist, and more studies are needed. We hope Schmettow's article and our discussion here inspire more researchers to take on the subject of sample-size calculation for usability studies.
Dexter Cahoy and Vir Phoha, Ruston, LA