Public speaking is regarded as an important competence under the general umbrella of communication. It has thus become a common assessment type at university, since it is closely related to the authentic needs of students’ various future working situations (Elfering, Grebner, and Wehr 2012). The assessment of public speaking is far from a new topic, and considerable effort has gone into the development of assessment rubrics, one example being the Public Speaking Competence Rubric (PSCR) developed by Schreiber et al. (2012). Nevertheless, a few fundamental questions surrounding this generic skill remain unclear and are subject to a number of measurement issues.
In the Chinese context, public speaking courses have also been incorporated into university language curricula, yet few programmes have examined the skill using a validated rubric. This paper discusses the development and validation of a public speaking rubric for a high-stakes university English speaking assessment, the Test of Oral Proficiency in English by Renmin University of China (RUC-TOPE), in which public speaking is examined as one of three sub-tests. Students are required to deliver a public speech on a given topic within 3 minutes after 10 minutes of preparation, and each performance on the task was marked by two raters. To conduct the validation systematically, Knoch and Chapelle’s (2017) argument-based framework was drawn on for accruing and evaluating evidence. The study also examined the rubric’s effectiveness in distinguishing students at different levels of speaking proficiency.
The study concludes by exploring public speaking as a construct, looking at issues of reliability and validity and some common sources of measurement bias. Insights are also drawn for curriculum design and the assessment of speaking skills at the tertiary level.