Testable, Reusable Units of Cognition

Michela Pedroni

Chair of Software Engineering, Department of Computer Science, ETH Zurich

Introduction

Curriculum and course planning is a key step in developing quality educational programs, but current practice very often lacks a systematic approach. In particular, it is difficult to ensure both that the course covers the entire subject area and that the students’ background meets the prerequisites. There is a clear need for a methodology and tools for curriculum design that help with:

The tasks listed above fall mainly into two activities: (1) extracting the knowledge taught in courses, textbooks, and curricula as a list of units of knowledge and comparing it to the intended results or to knowledge acquired from another source, and (2) defining the dependencies between units of knowledge and verifying that they are met. Both activities require a more formal understanding of curricula.

The main goal of my PhD is to develop a methodology that provides an engineering approach to course planning. In particular, tool support is needed to make a systematic approach appealing to instructors.

Previous research in the area

Related work can be found in the areas of curriculum design and instructional/learning design. The Curriculum Initiative CC2001 [2] defines the body of knowledge for Computer Science by listing the core concepts belonging to each area. While these efforts have a great impact on curricular planning, they do not specify how compliance with the course definitions (as reported in CC2001) can be assessed. Furthermore, CC2001 is mostly used for defining computing curricula; for course planning or textbook writing, the units of knowledge it provides are too coarse-grained and lack information about dependencies and other relationships between topics. Learning design approaches [3, 4], on the other hand, target networked and distance education. The main focus of existing tools is to produce interoperable, adaptive e-learning courses; the solutions they provide are overly complex for traditional courses and do not address course comparison.

Goals of the research

Our approach relies on the idea of a Truc (Testable, Reusable Unit of Cognition) [5]. A Truc is “a collection of concepts, operational skills and assessment criteria” [5] and is described in a standardized scheme containing sections on the Truc’s name, any alternative names, its dependencies, and a summary. The scheme also includes sections on the role of the Truc in the field, when and where it can be applied, and its benefits; it gives examples, states the common confusions that may arise when mastering the concepts the Truc represents, and lists the disadvantages of applying those concepts. The specification concludes with a set of typical tests for assessing understanding of the Truc’s concepts. Trucs can be used both by instructors to model the knowledge units taught in their courses and by students as a summary of the course contents.
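
As a rough illustration, the sections of this scheme can be read as the fields of a record. The following Python sketch is my own; the field names are chosen freely here and are not taken from [5]:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Truc:
        """One Testable, Reusable Unit of Cognition, with one field
        per section of the standardized description scheme."""
        name: str
        alternative_names: List[str] = field(default_factory=list)
        dependencies: List[str] = field(default_factory=list)  # names of prerequisite Trucs
        summary: str = ""
        role: str = ""             # role of the Truc in the field
        applicability: str = ""    # when and where the concepts apply
        benefits: str = ""
        examples: List[str] = field(default_factory=list)
        common_confusions: List[str] = field(default_factory=list)
        disadvantages: str = ""
        sample_tests: List[str] = field(default_factory=list)  # typical assessment questions

    # Hypothetical example: a Truc on inheritance that depends on a "Class" Truc.
    inheritance = Truc(
        name="Inheritance",
        alternative_names=["Subclassing"],
        dependencies=["Class"],
        summary="Defining a class as an extension of an existing class.")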

Trucs capture the content of educational material, and their dependencies define the edges of a graph of Trucs. With the help of this graph, we can define lessons as paths in the graph and courses as sequences of such paths. These can then be used for checking prerequisites and for comparing the contents of two courses, or of a course and its syllabus.
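
A minimal sketch of how such a graph could support these checks (the representation and function names below are my own assumptions, not the planned tool’s interface): dependencies are stored as an adjacency mapping, a course is a sequence of Truc names in teaching order, and prerequisite checking walks this sequence.

    from typing import Dict, List, Set

    def missing_prerequisites(course: List[str],
                              dependencies: Dict[str, List[str]],
                              background: Set[str]) -> Dict[str, Set[str]]:
        """For each Truc of a course, report the prerequisites that are
        neither in the students' background nor covered earlier on."""
        covered = set(background)
        missing = {}
        for truc in course:
            unmet = set(dependencies.get(truc, [])) - covered
            if unmet:
                missing[truc] = unmet
            covered.add(truc)
        return missing

    def uncovered(reference: List[str], course: List[str]) -> Set[str]:
        """Trucs listed by a reference (e.g. a syllabus) but not taught."""
        return set(reference) - set(course)

    # Toy data: "Inheritance" requires "Class", which requires "Object".
    deps = {"Class": ["Object"], "Inheritance": ["Class"]}
    course = ["Class", "Inheritance"]
    print(missing_prerequisites(course, deps, background=set()))
    # {'Class': {'Object'}} -- teach "Object" first or assume it as background
    print(uncovered(["Class", "Inheritance", "Genericity"], course))
    # {'Genericity'}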

The final goals of my thesis are:

Current status and open issues

At the moment, I am working on a first prototype of the tool that will support Truc generation and course planning. In parallel, I am investigating extensions to the idea of Trucs, such as (1) the appropriate variants of relations between Trucs that help capture prerequisite structures, and (2) finer-grained entities to capture small differences in course contents. Furthermore, I have started developing an initial set of Trucs covering part of the contents of the Introduction to Programming course. Open issues at the moment are:

Current stage in my program of study

I submitted my research plan in early summer 2005 and plan to defend my PhD by the end of 2007 or the beginning of 2008.

What I hope to gain from participating in the Doctoral Consortium

I attended the DC last year, and it was the highlight of the conference for me. I left with a clearer idea of what my work actually is, dozens of useful hints, and reading pointers that helped me greatly in advancing my work. This year I again hope to gain insights into the interesting topics other PhD students are working on, to meet doctoral students who share an interest in CS education, and to get useful feedback on the direction my work is currently taking.

Bibliographic references

[1] The Joint Task Force on Computing Curricula 2005: Computing Curricula 2005 (draft). April 4, 2005. http://www.acm.org/education/Draft_5-23-051.pdf
[2] The Joint Task Force on Computing Curricula: Computing Curricula 2001 (final report). December 2001. http://www.sigcse.org/cc2001/
[3] R. Koper and C. Tattersall: Learning Design: A Handbook on Modelling and Delivering Networked Education and Training. Springer-Verlag, New York, 2005.
[4] G. Paquette: Meta-Knowledge Representation for Learning Scenarios Engineering. In Proceedings of AI-Ed99, AI and Education, Open Learning Environments, IOS Press, Amsterdam, June 1999.
[5] B. Meyer: Testable, Reusable Units of Cognition. IEEE Computer 39(4): 20-24, April 2006. http://doi.ieeecomputersociety.org/10.1109/MC.2006.141