When learning interpreting, Chinese students mostly assess their interpreting output against reference written translations, which gives them only a limited understanding of their overall interpreting competence. The publication of China's Standards of English (CSE) provides students with a convenient tool for evaluating interpreting competence – the Interpreting Self-Assessment Scale (ISS), a subscale of the Interpreting Scale. However, mismatches are found between descriptors in the ISS and those in other parts of the Interpreting Scale that describe the same typical characteristic at the same level. Whether the content of the ISS can measure interpreting competence therefore requires further validation. To address these problems, this research aims to conduct a content validation of the ISS from the perspectives of content relevance and content representativeness, adopting expert judgement, content analysis and contrastive analysis. The content validation provides validity evidence for the use of the ISS and for its further validation. After the descriptors are judged by experts and analyzed by the researcher, suggestions will be offered for improving the scale. Ultimately, the promotion and application of the ISS will assist students in self-assessment and self-adjustment during the learning process.