How good is your English?
With the expansion of the Higher Education (HE) sector in Pakistan, the need for
pre-sessional English language courses has emerged. Responding to this need,
many universities and HE institutions (predominantly the private ones) have
prefaced their graduate programmes with English for Academic Purposes (EAP)
courses, popularly known as pre-sessional language support programmes.
Since English functions as both the gatekeeper and medium of instruction at the HE
level, these warm-up programmes provide an opportunity for students to improve
their English proficiency levels and, in turn, their confidence. The literature
also upholds the provision of pre-sessional programmes as a good measure of the
"equal opportunities" claim put forth by many reputable universities. In a
context like Pakistan, where a majority of educated people are not properly
exposed to the academic variety of English language, the ethical and academic
merits of this provision are invaluable. Within the broader domain of English
Language Teaching (ELT), pre-sessional EAP programmes are distinguished by their
philosophy and construct. Besides the many sub-ingredients that feed into their
unique construct, these programmes are essentially needs-specific. Such a
construct preconditions a close fit between the
course outcomes and the language demands as stipulated by the target academic
environment (mostly a postgraduate programme). An undercurrent of this construct
is that "needs analysis" and "assessment planning" precede course design and
delivery, a detail which many language tutors tend to overlook (Banerjee
& Wall, 2006). No doubt, a strong conceptual design complemented by an
extensive repertoire of teaching methodologies could work wonders; yet such
enablers could get neutralised by insufficient consideration given to course
exit assessment during the planning phase.
When the assessment criteria
used in pre-sessional exit tests do not reflect the criteria against which the
students' performance will be judged in academic contexts, the scores achieved
are less likely to capture the students' ability to perform in those contexts.
It is thus very important for English language practitioners to understand
"course exit assessment" as an integral function of pre-sessional EAP course
design. This article attempts to highlight this link while detailing one
type of exit assessment, i.e., "can do" scales.
To begin with, most HE
institutions specify a minimum level of English language proficiency for
admission to their graduate courses. However, each sets down a different route
by which candidates can satisfy these requirements. There are institutions that
only approve of scores from standardised tests like IELTS, TOEFL, etc., or tests
that have been designed by external agencies for institutional consumption (e.g.
the Institutional TOEFL). Some institutions, whether or not they accept
standardised test scores, require students to take an in-house aptitude test. The
results of this test determine whether the students need further language
support before they begin their academic programme. In yet other institutions
(many examples are found in the UK), students who meet all admission
requirements other than the language-related ones are required to successfully
complete a pre-sessional EAP course.
Likewise, an overview of several
pre-sessional EAP courses establishes that there is no normative approach to
assessing and reporting performance on such courses. Accordingly, different
universities take different routes to decide whether students are proficient
enough to begin studying in their academic departments after completing the
pre-sessional course. Some universities replicate the design of standardised
tests and use these in-house tests as both pre- and post-course measures to
determine the change in the students' proficiency levels. Many institutions
prefer to judge students on their in-course performance, combining internal test
scores, performance on written assignments (projects and course assignments,
collected together in portfolios), formal presentations, and classroom
participation. Upon completion of the pre-sessional EAP course, most
institutions send individual profile reports to the student affairs department
based on results from one or more of the tests taken during the course. These
individual reports could either carry an aggregate grade determined by a
combination of test results and other assessments, or a pass-fail judgment which
is based on a pre-decided cut-off point. Some even decide students' fate by
means of a simple, straightforward criterion like attendance. One popular exit
assessment route is to have students re-take an external standardised test like
the IELTS or TOEFL.
However, this poses a problem. Even if we look at
writing skills, research studies that have analysed writing demands on
postgraduate degree programmes and those on IELTS find them incomparable (see
Canseco and Byrd, 1989; Horowitz, 1986; and Moore and Morton, 2005). A typical
university assignment requires students to select relevant data from various
sources, reorganise the data in response to the task, and to develop their
response using academic register.
Other writing skills include the
ability to present information in tables and graphs, to interpret and translate
data from visuals into words and to revise and edit drafts. In fulfilling all
these demands, students are rarely required to refer to personal experience.
Exceptions apply where tasks require students to write personal reflections on
their learning experiences, however, the point of reference for these
reflections is an established tradition of already available knowledge (in the
form of literature).
If we look at the IELTS writing section, Task 1
aligns with the writing demands placed on university students. By contrast,
Task 2 typically requires students to set forth an argument drawing on
their personal experience, rather than asking them to draw on reading sources or
primary data. Moreover, IELTS Task 2 is restricted to a single genre, that of
persuasive writing, whereas university writing tasks are wide-ranging,
encompassing reviews, reflection papers, case study reports, research reports,
research proposals, summaries, etc.
Though the above example is too specific to
be generalised, it does convey the incomparability built
into this route. Considering the limitations of such routes, a recent
development in pre-sessional exit assessment is the use of "can-do" scales.
These consist of lists of performance objectives against which EAP tutors
indicate whether or not students are able to achieve each objective. Carefully
developed and piloted, such scales provide useful guidance to tutors, students
and academic administrators. They are more practical and explicitly reflect
the characteristics of the EAP construct.
Some aspects to be considered while
planning the "can do" scales are:
Function and audience: The main
function of the individual "can do" scale-report should be to project an
accurate picture of that student's abilities. Students should receive a copy of
the report so that they benefit from receiving a frank account of their
strengths and weaknesses. It is, however, not advisable to soften the reports of
students who are not performing adequately. Students have the right to be
alerted while the light is still amber.
Coverage: The report form should
explicitly reflect current EAP theory. It should specify the skills and
strategies that emerge from needs analyses, previous experiences of EAP teachers
and academic tutors and most importantly, analysis of the language demands put
forth by assessed tasks in the main course of study. It is inappropriate to
comment on attitude, aptitude, motivation, awareness or any other quality which
is a feature of personality rather than a linguistic or academic ability. In
other words, the more specific you are the better.
Evidence: It is best
to comment only on features that could be supported by evidence. Moreover, the
limits of judgments should also be made clear, for instance, it should be stated
explicitly if the tasks that students perform during the pre-sessional course
approximate (but do not fully replicate) the demands students are likely to
encounter in their study contexts.
The exit assessment report should only
indicate how students have performed on the tasks during the course rather than
predicting whether they would do well in their future settings. The intention
should be to provide an evidential basis for deciding whether or not a student
is ready to begin studying on a particular programme. This allows academic
administrators for different degree programmes to interpret the evidence
differently, depending on the specific aspects of EAP proficiency they consider
more important for their fields of study.
Format: Such scales are an
alternative to the prose report format, which is difficult and time-consuming
for tutors to write and difficult for the end users (the academic
administrators and students) to interpret. "Can do" checklists, where tutors
only have to place ticks in columns to indicate whether the student did or did
not demonstrate certain abilities, are easier to use and to interpret.
The following procedure is advisable for devising an exit
assessment checklist:
• Draw up an exhaustive list of the features of
academic reading, writing, listening and speaking abilities (this could
ideally be achieved after some research).
• Combine some items that are
similar and, in contrast, break down some broad descriptors into sub-categories.
For instance, an item "content" could be broken into two "can do"
sub-categories: "can analyse the topic of the assignment" and "can produce
relevant content".
• Group items according to the language skill they
represent. At times such decisions are complex. For example, "providing
sufficient evidence" is a writing skill but is also dependent on the student's
ability to understand and make use of reading. Such a problem could be resolved
by breaking the skill down into two "can do" components "can analyse
argumentation in academic texts" (classified in the checklist as a reading
skill) and "can reproduce others' ideas using their own words" (classified as a
writing skill).
• Include two columns for each "can-do" descriptor: "yes"
and "must pay attention to". In the first column tick off against the skill/s to
report that the student can do this. The second column should be used to
indicate the skill or strategy a student must work towards to meet the language
demands of the target programme.
• Share the final draft of the checklist with the
admissions administrators and the pre-sessional course tutors to invite their
feedback on it.
• Pilot the checklist with at least 30-40 students. It
would be ideal to distribute the checklist across all student types (strong,
weak, average). On the basis of this pilot procedure, further changes could be
made.
• Maximise opportunities for the course tutors to familiarise
themselves with the checklist to ensure rater familiarisation with the adopted
rating scale. All pre-sessional course tutors should be involved at each point
during the planning and piloting phase to arrive at shared interpretations of
each of the "can-do" descriptors.
• Use the exit assessment checklist
during the last week of the pre-sessional course. Each student should receive
the original report and a copy should be sent to the student affairs
department.
• Last but most importantly, revisit the checklist every
two years to check for any emergent changes.
By Fariha Hayat
The writer is a faculty member at a private university (Dawn)