The theory of computability was launched in the 1930s, by a group of logicians who proposed new characterizations of the ancient idea of an algorithmic process. The most prominent of these iconoclasts were Kurt Gödel, Alonzo Church, and Alan Turing. The theoretical and philosophical work that they carried out in the 1930s laid the foundations for the computer revolution, and this revolution in turn fueled the fantastic expansion of scientific knowledge in the late 20th and early 21st centuries. Thanks in large part to these groundbreaking logico-mathematical investigations, unimagined number-crunching power was soon boosting all fields of scientific enquiry. (For an account of other early contributors to the emergence of the computer age see Copeland and Sommaruga [9].)
The motivation of these three revolutionary thinkers was not to pioneer the disciplines now known as theoretical and applied computer science, although with hindsight this is indeed what they did. Nor was their objective to design electronic digital computers, although Turing did go on to do so. The founding fathers of computability would have thought of themselves as working in a most abstract field, far from practical computing. They sought to clarify and define the limits of human computability in order to resolve open questions at the foundations of mathematics.
In my opinion, this article is long overdue. The Organisational Semiotics community, which might also be counted among the "alternative" approaches of the last paragraph, and of which I consider myself a part, sees itself outside the field of Computer Science and firmly within Informatics: it is less silicon, more society focused. But it is still a computing subject, with computability issues to be solved.
For example, my own research on utterance understanding, which allows us to interact vocally with software (https://www.youtube.com/watch?v=HsJyrdtk0GM), was considered impossible when I entered computing in the mid-1980s: a language had to be defined by a syntax tree, and something that can mean anything loses any value. Sure, there was ELIZA and her derivatives; however, these achieve only a limited understanding of utterances, and are concerned with maintaining a conversation rather than acting as a user interface. Now there are search front-ends, but these still don't replace the graphical trope. Natural language, because of its autopoietic nature, is itself used as the programming language; anyone who follows a recipe or reads the rules of a game might not find this too surprising. Utterance understanding, mediation, rides atop a medium of interpretation. This technology was inspired more by Charles Sanders Peirce's triadic semiotics, Iris Murdoch's inner voice, the approaches to language of J. L. Austin and P. B. Andersen, and Winograd's blocks-world program SHRDLU, than by the need to adhere to a necessarily Turing-complete model.
However, while this work has huge implications for the blind and partially sighted community, the illiterate and other technologically disenfranchised groups, as well as for the Internet of Things and the size and battery life of small devices, I don't expect it will be seen as fundamental in the same way as the Church-Turing model. Perhaps, as this article seems to highlight, this is because we are more swayed by what we have seen and done than by the possibilities of our imagination? This article gives me hope that my work may be of use, and that there are plenty of others out there, like me, with their own approach.
For me, the Turing-Church model is the extensible model of function which maps, through a compiler or interpreter, onto a sufficiently large tape holding data and atomic instructions. This is the abstraction behind the process address space and high-level programming languages. These have developed over the years to include assignment, control structures, and classes. Because software projects typically contain thousands (millions?) of lines of code, larger and larger abstractions are sought to simplify a complex written entity.
One issue with this is the translation between the forms of language. The outer form is a composite textual description, allowing user-applicable functions to be mapped onto more fundamental functions, eventually mapping either onto basic system functions or onto implementation as variables and their assignment. This is all converted en masse into a binary form. To some extent the monolithic structure this creates is alleviated by the use of interpreted languages, which give us an immediacy; however, another issue is the requirement for a further level: a user interface to allow access to these internal APIs. Fine for able-bodied computer scientists happy with a command-line interface; not so for (many sections of) the general public.
My line of enquiry is a unitary approach to 'programming': the mapping of arrays of arbitrary, large, non-zero numbers. In hindsight, this work is (partly) inspired by Gödel numbers; however, the important distinction is that my numbers are composite (arrays). They form the input from the user, as they can be generated vocally through publicly available interfaces, such as Android's. They can be matched to a meaning by a simple pattern-matching algorithm, with a simple disambiguation algorithm to deal with multiple matches. They are also the basis for inner interpretation: everything is an utterance. These utterances are organised into groups: concepts, which are built on each other in a composite style, similar to Church's lambda calculus. This does not equate to text understanding, which seems to attract much criticism, but it does allow users to interact with their software via speech. An early prototype of a typical concept mashup can be seen at bit.ly/enguage.
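To make the idea of matching number arrays to meanings concrete, here is a minimal sketch of one plausible such matcher. All names, the example codes, the wildcard convention, and the fewest-wildcards disambiguation rule are my own illustrative assumptions, not the actual Enguage implementation:

```python
# Hypothetical sketch: match an utterance (an array of arbitrary
# non-zero numbers, e.g. word codes) against stored patterns.
# Since the codes are non-zero, 0 is free to act as a wildcard slot.
WILD = 0

# Each pattern maps a sequence of codes to a "concept" (its meaning).
patterns = {
    (17, 42, WILD): "set-variable",     # e.g. "set x to <anything>"
    (17, 42, 99):   "set-variable-99",  # a more specific variant
    (7, WILD):      "query",
}

def matches(pattern, utterance):
    """True if the pattern fits the utterance, WILD matching any code."""
    if len(pattern) != len(utterance):
        return False
    return all(p == WILD or p == u for p, u in zip(pattern, utterance))

def interpret(utterance):
    """Return the concept of the best-matching pattern, or None.

    Disambiguation when several patterns match: prefer the most
    specific pattern, i.e. the one with the fewest wildcards.
    """
    candidates = [(p, c) for p, c in patterns.items()
                  if matches(p, utterance)]
    if not candidates:
        return None
    pattern, concept = min(candidates, key=lambda pc: pc[0].count(WILD))
    return concept

# The exact pattern (17, 42, 99) beats the wildcard variant:
print(interpret((17, 42, 99)))
# Only the wildcard variant fits here:
print(interpret((17, 42, 5)))
```

The point of the sketch is that interpretation needs no parse tree: an utterance either fits a stored pattern or it does not, and a simple specificity rule resolves clashes.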