Credit: Andrij Borys Associates, Shutterstock
The theory of computation is one of the crown jewels of the computer science curriculum. It stretches from the discovery of mathematical problems, such as the halting problem, that cannot be solved by computers, to the most celebrated open problem in computer science today: the P vs. NP question. Since the founding of our discipline by Church and Turing in the 1930s, the theory of computation has addressed some of the most fundamental questions about computers: What does it mean to compute the solution to a problem? Which problems can be solved by computers? Which problems can be solved efficiently, in theory and in practice?
Yet computational theory occupies an ambiguous role in the undergraduate curriculum. It is a required core course for the computer science major at many institutions, whereas at many others it is an upper-level elective. And whether required or not, the theory course can have a reputation as an austere and perhaps even irrelevant niche, disconnected from the skills and ideas that comprise computer science. This is not a new phenomenon, and in recent decades the CS community has worked diligently to improve the accessibility and perceived relevance of the theory course. Notable contributions include the JFLAP software for experimentation with automata,8 and various efforts to promote "NP-completeness for all" via visualizations and practical laboratory exercises.1
I believe this is a common transition that has been going on for at least 20 years.
In a complexity course for computer scientists, one could take this a step further and not talk about the classes P or NP (or the hierarchies beyond) at all. Instead, just postulate that SAT is a difficult (exponential-time) problem and show the standard reductions to 3SAT, Vertex Cover, CLIQUE, and so on, using standard programming languages as you have done.
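One such reduction can be sketched in a few lines of Python. The following is an illustrative (not classroom-tested) encoding of the textbook 3SAT-to-CLIQUE reduction; the representation of a literal as a (variable, polarity) pair is my own choice for this sketch:

```python
from itertools import combinations

def threesat_to_clique(clauses):
    """Reduce a 3SAT instance to CLIQUE.

    clauses: list of clauses, each a tuple of literals; a literal is a
    (variable, polarity) pair, e.g. ('x', True) means x, ('x', False)
    means not-x.  Returns (vertices, edges, k): the formula is
    satisfiable iff the graph has a clique of size k (= #clauses).
    """
    # One vertex per (clause index, literal) occurrence.
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u, v in combinations(vertices, 2):
        (i, (xu, pu)), (j, (xv, pv)) = u, v
        # Connect literals from different clauses that do not contradict.
        if i != j and not (xu == xv and pu != pv):
            edges.add((u, v))
    return vertices, edges, len(clauses)
```

Picking one vertex per clause so that all chosen vertices are pairwise connected then corresponds exactly to a consistent satisfying assignment.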
Then spend a little time on the reductions between search problems and optimization problems (usually easy in one direction and requiring a binary-search approach in the other), illustrating the computational relationship between them.
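The binary-search direction can itself be sketched concretely. In the sketch below, the function names and the brute-force vertex-cover decision procedure are illustrative only; any monotone decision oracle would do:

```python
from itertools import combinations

def has_vertex_cover(edges, nodes, k):
    """Brute-force decision procedure: is there a vertex cover of size <= k?"""
    return any(all(u in s or v in s for u, v in edges)
               for r in range(k + 1)
               for s in map(set, combinations(nodes, r)))

def minimize_with_oracle(decide, lo, hi):
    """Find the smallest k in [lo, hi] with decide(k) True, assuming
    decide is monotone (False ... False True ... True).  Each probe is
    one call to the decision-problem solver."""
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For a triangle graph, `minimize_with_oracle(lambda k: has_vertex_cover(edges, nodes, k), 0, 3)` finds the optimum cover size with only O(log n) oracle calls, which is the computational relationship the text describes.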
For the majority of CS students, the mathematical beauty of completeness arguments and problem classification holds limited interest; they are much more interested in distinguishing what is hard from what is easy. Doing this in the language of programming concepts they are already familiar with, and going directly to the core of the complexity problem, simplifies the learning task considerably.
MacCormick's viewpoint is a timely discussion of the value of the Theory
of Computation and how one can offer it in innovative ways. I would like
to point out two developments that might be of widespread interest.
The first is a tool called Automata Tutor that automatically grades student solutions
and also empowers one to generate new
problem instances (see https://link.springer.com/chapter/10.1007/978-3-030-53291-8_1).
It has been widely used, and the above article includes many testimonials.
The second is a tool called Jove (https://github.com/ganeshutah/Jove.git) that
provides Jupyter notebooks covering all traditional Theory of Computation topics
plus many more timely additions (e.g., Binary Decision Diagrams used in Formal
Methods, viewable as minimal DFAs). It introduces context-free parsing in a hands-on
manner through a calculator, asking students to extend it with new operators.
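Jove's own notebooks aside, the flavor of such a calculator exercise can be sketched with a minimal recursive-descent parser. This sketch is mine, not Jove's code; the grammar and function names are illustrative, and a student might extend it with, say, a subtraction or exponentiation operator:

```python
import re

def tokenize(s):
    # Numbers, parentheses, and the two operators of this toy grammar.
    return re.findall(r"\d+|[()+*]", s)

def parse_expr(toks):
    # expr -> term ('+' term)*
    val = parse_term(toks)
    while toks and toks[0] == '+':
        toks.pop(0)
        val += parse_term(toks)
    return val

def parse_term(toks):
    # term -> factor ('*' factor)*
    val = parse_factor(toks)
    while toks and toks[0] == '*':
        toks.pop(0)
        val *= parse_factor(toks)
    return val

def parse_factor(toks):
    # factor -> NUMBER | '(' expr ')'
    tok = toks.pop(0)
    if tok == '(':
        val = parse_expr(toks)
        toks.pop(0)  # consume ')'
        return val
    return int(tok)

def calc(s):
    return parse_expr(tokenize(s))
```

The grammar's layering (expr over term over factor) is what gives `*` precedence over `+`, which is exactly the kind of point the hands-on extension exercise drives home.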
Jove does not require any software installation: it can be run directly from its
GitHub page on Google's Colab.
Jove includes the best of JFLAP (multiple animation modes directly within notebook
cells) and a REPL interface
(scriptable executions that construct and display large automata). Its largely
declarative code adds to one's understanding. One can verify machine constructions
through animations, scripted executions, and rigorous checks (e.g., isomorphism
of minimized DFAs).
Jove includes important topics lost in the mists of time: Brzozowski's DFA
minimization, which is one line of Python code ("Reverse; Determinize; Reverse;
Determinize"), and Brzozowski's Derivatives, which handle extended regular expressions.
The incremental rule-based nature of Derivatives is reminiscent of calculus rules,
and also fits directly into how one writes Structural Operational Semantic rules.
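Those rules fit in a few lines as well. Here is an illustrative sketch of a derivative-based matcher; the tuple-encoded regex AST is an assumption of this example, not Jove's representation:

```python
# Regex AST: ('empty',) is the empty language, ('eps',) is the empty
# string, ('chr', c) a character, plus ('alt', r, s), ('cat', r, s),
# and ('star', r).
EMPTY, EPS = ('empty',), ('eps',)

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag == 'eps': return True
    if tag in ('empty', 'chr'): return False
    if tag == 'alt': return nullable(r[1]) or nullable(r[2])
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    return True  # star

def deriv(r, c):
    """Brzozowski derivative of r with respect to character c."""
    tag = r[0]
    if tag in ('empty', 'eps'): return EMPTY
    if tag == 'chr': return EPS if r[1] == c else EMPTY
    if tag == 'alt': return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    return ('cat', deriv(r[1], c), r)  # star unrolls once

def matches(r, s):
    # Differentiate by each character in turn, then test nullability.
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

Each `deriv` rule mirrors a calculus-style rewrite on the structure of the expression, which is what makes the technique such a natural companion to Structural Operational Semantics rules.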
An entire undergraduate course in the form of weekly Jupyter notebook-based exercises
is available and has been especially helpful during distance-mode education.
In conclusion, I wholeheartedly agree with MacCormick; this subject is far from
being stodgy; rather, it is a springboard for many a future teaching adventure.
Automatically learning DFAs from examples, and Markov models, are only a small step away!
Ganesh Gopalakrishnan
Professor, School of Computing, University of Utah