The “Problem” of L2 Writers in College Composition Placement: Identity, Outcomes, and the Future of Directed Self-Placement

In response to the rapidly growing number of international and multilingual students on U.S. college campuses, many colleges’ first-year composition (FYC) programs have expanded to include classes designated for “ESL” or “International” students. Despite increasing scholarship on the best pedagogical practices for such classes, the implications of placing students into them remain “the thorniest of issues” (Crusan, 2011), with mounting debate as to how to measure L2 students’ suitability for either “ESL” or “mainstream” writing classes, as well as the role students should play in labeling themselves “ESL” writers. Directed Self-Placement (DSP) is emerging as a more equitable and anti-racist alternative to test-based placement; however, there are concerns about its suitability for L2 writers specifically (Crusan, 2011; Ferris, 2017). This literature review therefore aims to address two questions: (1) What are the unique challenges of placing L2 writers in FYC courses? Specifically, what are the implications of the identity labeling inherent in placing L2 students in courses designated as “ESL” or “for International Students” with regard to student investment and learning outcomes? (2) Might DSP offer a viable alternative to traditional L2 placement testing that addresses such implications?


INTRODUCTION
First-year composition (FYC) courses, while not omnipresent, are prolific across U.S. undergraduate colleges and universities, having long served as a gate-keeping requirement students must pass on their path to graduation. The demographics of the FYC classroom, however, have been rapidly changing. With the consistent growth in the number of international students enrolling in U.S. institutions of higher education since the 1950s, FYC classrooms have become sites of increasing linguistic diversity. In the school year ending in 2017, a record high of 1.1 million international students were enrolled in U.S. colleges, making up 5% of the total student body in the country (Zong & Batalova, 2018). Attending to the educational needs of such students in FYC classrooms has become the subject of a fast-growing body of scholarship, calling into question long-standing "best practices" in a learning environment that is less and less homogenous, less and less "American." The conversation about how best to support a linguistically diverse student body is not new. In 1974, the Executive Committee for the Conference on College Composition and Communication (CCCC) adopted a resolution affirming "Students' Right to Their Own Language." The statement, undoubtedly motivated by the Civil Rights Movement of the preceding decade, challenged the notion of one single, standardized English language dialect and asserted, "[t]he claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another" (Committee, 1975, pp. 710-711). The statement concluded that to truly implement and uphold the resolution, educators must have access to the training and experiences that would allow them "to respect diversity and uphold the right of students" (p. 711). Today, nearly 50 years since the initial resolution, the need to consider the presence of and pedagogical approach to a variety of English language dialects in the FYC classroom remains as relevant and critical as ever.
Over the last few decades, writing program administrators (WPAs) have tried to bolster FYC course offerings to more accurately reflect the range of student writers (and writing backgrounds) enrolled at U.S. universities. Silva (1994) argued that universities needed to make a wider range of FYC options available to multilingual writers, stating the need for offerings as diverse as "Basic Writing," "ESL Writing," and "Cross-Cultural Composition." This call continues to be echoed by CCCC (2020) today. While Silva acknowledged that establishing such a range of FYC courses may feel impractical (or even impossible) given budgetary and labor constraints, universities have started to heed this call. According to Lawrick (2013), a growing number of universities are offering English as a Second Language (ESL) track FYC courses as an equal alternative to (as opposed to a prerequisite for) traditional or "mainstream" FYC courses. These courses are designed for students who identify (or are identified) as English language learners, and are presumably intended to support their specific educational needs and respect their individual linguistic and educational backgrounds, rather than assume a homogenous or standardized language style or skillset. As such, ESL-track FYC classes appear to have the potential to provide a space for English language learners (ELLs) to develop skills in American academic writing conventions, while still affirming students' right to their own Englishes.
Yet, these relatively "new" offerings are not without their challenges. Perhaps most notably, Matsuda (2006) argues that L2-specific composition courses can actually serve as sites of "linguistic containment": a way to segregate students who do not conform to the standardized English of American academic discourse and to keep linguistic difference out of the mainstream classroom (p. 647). An additional, though less widely discussed, concern is how the addition of such ESL-specific FYC classes affects institutional placement practices. Long relegated to either a single placement test or students' own registration choices, the FYC placement process becomes a more complicated (and arguably more fraught) endeavor with an increased number of FYC course offerings, some of which explicitly label themselves as "for" a certain group of students (e.g., ESL, international, etc.). Indeed, in recent years, CCCC has acknowledged that FYC placement is a particular concern for multilingual students. The committee's "Statement on Second Language Writing and Multilingual Writers," initially published in 2001, was revised in 2009, and again in 2020, to include an updated stance on placement practices for multilingual writers. First stating that FYC placement should be based on student proficiency rather than race, nationality, or any other such classifying identity markers, the CCCC statement goes on to support a directed self-placement (DSP) model for L2 writers: a model in which students receive guidance on how to place themselves, and then make their own placement decisions. The literature on the efficacy of DSP for L2 writers is limited, however, with only a handful of studies now beginning to emerge. As such, this literature review will aim to better understand the "problem" of L2 FYC placement. What are the unique challenges of placing L2 writers in FYC courses currently facing both administrators and students? Specifically, what are the implications of the identity labeling inherent in placing students in L2-specific courses? Can DSP offer a possible solution to these challenges, as CCCC suggests?

THE CHALLENGE OF PLACEMENT AND IDENTITY LABELING
There exist obvious challenges to accurate and effective FYC placement of multilingual writers. Di Gennaro (2006) notes that, in general, writing ability is a notoriously difficult construct to define, as scholars often disagree on the specific "skills and practices that constitute 'writing'" (Greenberg et al., 1986, cited in di Gennaro, 2006, p. 3), thereby rendering a uniform theoretical model of writing ability for assessment challenging to create. Crusan (2013) further notes that one of the primary challenges facing the writing assessment process is that any construct of writing ability is highly contextualized. Of course, there are numerous writing assessment tasks that are indeed valid and reliable; however, when scores from such tasks on standardized tests are used as the sole criterion for placement, the specific learning needs of students may not be fully aligned with the specific curricular context of their institution's FYC program. As Crusan (2002) argues, "students should be tested in a manner similar to the work of the course," and writing assessments should be closely linked to course curricula (which vary widely across institutions and, in some cases, across class sections) (p. 25).
Therefore, the widespread consensus is that any "credible" form of writing assessment must elicit a writing sample from students (di Gennaro, 2006, p. 3), meaning that a direct writing production task (as opposed to multiple choice, cloze, or other such selected response tasks) is widely considered to be the best method for assessing writing ability. Further, CCCC (2020) argues that FYC placement, while factoring in English language proficiency level, should not rely on writing scores from standardized tests, but ideally on direct assessment of multiple writing samples by institutional advisors or administrators. Yet, as Crusan (2002) claims, "while we know what we should do, we often do not do it" (p. 18). While it stands to reason that placement practices like those described by CCCC would lead to a more individualized (and thereby, hopefully, more "accurate") placement of L2 writers in FYC classes, such an approach is undeniably more time-consuming and labor-intensive, which, in turn, renders it more expensive.
One of the more commonly cited challenges of an inadequate L2 FYC placement system is the risk of "false positive" or "false negative" placements, in which students are placed in a writing course that is too advanced or not advanced enough, respectively (Shin & Lidster, 2017, p. 358). However, other, perhaps less obvious, concerns about the risks associated with L2 FYC placement have emerged in recent years. While the last few decades have seen a proliferation of the kinds of L2-specific FYC classes called for by Silva (1994) and CCCC (2020), these classes often feature course names that contain identity labels, such as "basic," "ESL," "ELL," "multilingual," or the designation of being "for International Students." On the most basic level, such names may not resonate with students' own perceptions of their identities and needs as student writers. The 2020 CCCC statement itself acknowledges, "[n]ot all students self-identify as 'ESL,' 'multilingual,' or 'second language' students," and while some "may welcome the opportunity to enroll in a writing course designated for multilingual writers for the additional language support…others may prefer to enroll in a mainstream first-year composition course." As such, placement practices that rely solely on any external assessment tool (be that a standardized test score, an in-house placement test, or even an individualized direct assessment of sample writing) have the potential to place students in a class designated for an identity with which they may not affiliate.
What, then, are the potential costs of such a "mismatch" between placement and self-perception of identity? In other words, does this actually matter for student learning? If we subscribe to the belief that learning context and environment play a role in fostering hospitable circumstances for acquisition, then yes. In Peirce's (1995) view, the field of second language acquisition has often created a false dichotomy between the individual and his/her social learning context when considering the affective factors enabling language learning. Pushing back on Krashen's (1981) focus on the role of an individual's low affective filter in acquisition, as well as Gardner's (1985) assertion that a positive social context will boost a learner's self-confidence (and, in turn, their learning), Peirce called for a more integrated approach that actively considered the ways in which social context and power dynamics influence an individual learner's "investment" in learning. As she notes, "…theories of the good language learner have been developed on the premise that language learners can choose under what conditions they will interact with members of the target language community and that the language learner's access to the target language community is a function of the learner's motivation" (p. 12). Though Peirce focuses largely on how power dynamics affect students' opportunities to acquire language skills outside the classroom, the very nature of an external placement tool (such as a test) suggests that unequal power relations can also affect students' opportunities in the classroom. If, following Peirce, the role of the individual and the role of their social context are inextricably linked in either affording or limiting their "investment" in language learning, then placing students in classroom contexts where they do not feel they belong (or simply do not want to be) certainly runs counter to promoting student desire to learn.

Student Perceptions
From this perspective, the possible implications of placing L2 students in FYC classes they do not believe are "for" students like them are great. Indeed, such "mismatches" are well-documented. Existing literature reveals one significant overarching challenge in assigning labels to students and/or their courses: a lack of consensus on the very definition of such labels. As Lawrick (2013) noted, scholars of second language writing (SLW) tend to draw a distinction between international ESL and U.S.-resident ESL, designations based largely on the geographic site of students' K-12 educational context and post-college residency plans (p. 28). However, Costino and Hyon (2007) highlighted an important potential disparity between how scholars and administrators categorize students and how students identify themselves. In their study, the authors found that students themselves did not appear to factor in their residency status when self-identifying as ESL or ELL, despite the prevalence of the scholarly labels cited by Lawrick (2013) and others, such as "Generation 1.5." Instead, students associated the labels ESL and ELL with English language proficiency exclusively. It should be noted, however, that in her study of the demographics of ESL FYC courses at Purdue University in the midwestern U.S., Lawrick (2013) refuted the argument of some scholars that U.S.-resident ESL students are becoming a larger and larger percentage of ESL FYC classrooms; in her findings, only 9% of the students enrolled in ESL FYC classes were U.S. residents, indicating that, contrary to Costino and Hyon's findings, students may actually factor in their residency status when determining whether an ESL-track FYC course is "for" them.
Further, Marshall (2009) found that students tend not to factor in their language learner status when conceiving of their identity more broadly, with survey data indicating that over 90% of interviewees referred to national identity when describing their identity and background, while only 3.3% referenced English as an additional and/or second language. Therefore, placing a student in a section designated with a label such as "ESL," "ELL," or "multilingual" may emphasize a characteristic that students themselves do not even consider a significant aspect of their identity. Moreover, several studies suggest that linguistically identifying labels are often considered inherently stigmatizing by students. While Costino and Hyon (2007) found that multilingual students viewed the labels "ESL" and "ELL" with mixed connotations (including favorable, negative, and a temporary, evolving state), Ortmeier-Hooper (2008), Marshall (2009), and Lawrick (2013) all suggest that students believe the label "ESL" carries negative connotations. In his study, Marshall (2009) set out to understand how multilingual students "negotiate and perform" their linguistic identity in a college setting that has traditionally been comprised of Anglophone students of European ancestry (p. 43). His findings revealed that students' day-to-day lives reflect a "rich multilingualism" and the capacity for complex code-switching, especially between home and school settings (p. 48). Marshall juxtaposes these diverse, complex self-identifications with students' perceptions of the term ESL, including the narrow and negative associations some students make regarding remediation, deficiency, and an outsider status at the university. Marshall argued that being labeled as ESL frames students "as deficient, imposing an identity which positions their presence in the university as a problem to be fixed rather than an asset to be welcomed" (p. 41). Enrollment or placement in an ESL FYC course thus has implications for students' sense of their linguistic aptitude and autonomy, as well as their value to the institution more broadly, before even entering the classroom.
Yet, when given a choice in their own FYC class assignment, the stigmatizing connotations associated with labels like "ESL" do not necessarily deter students from enrolling in FYC courses labeled as such. In his year-long study of ELL students enrolled in both "ESL" and "mainstream" FYC courses at a large university in the southeastern U.S., Braine (1996) found that 95% of students who opted to enroll in an FYC course designated as ESL were satisfied with the class they had taken, with a majority of students citing feeling more "comfortable" and "at ease" in the ESL sections (p. 97). Among the many reasons given for this comfort, one common theme was a relative lack of self-consciousness about their "accents," making many students more willing to ask questions of the teacher and participate in class discussions. They also noted that their teachers in ESL-track courses spent more one-on-one time with them and were more "'understanding' and 'caring'" (p. 97). In contrast, ELL students enrolled in mainstream classes recounted instances of impatience from both peers and teachers and cited feeling less comfortable overall.
Though many institutions would presumably prefer their students feel at ease in their learning environments in the interest of overall student well-being, a sense of comfort in the classroom (or lack thereof) also has potential implications for learning. While there is some debate as to the effect of anxiety on language learning, the general consensus is that language anxiety is detrimental, as opposed to motivational, in language learning (Ellis, 2015). MacIntyre and Gardner (1991) suggested that language anxiety can emerge as a result of negative learning experiences; Bailey (1983) found this was indeed the case. In a study of language learners' diaries, Bailey noted that students' anxiety often rose when they compared themselves to their peers and found their performance inferior. These findings clearly suggest that "false positives" (or higher-than-appropriate placements) can be anxiety-producing for FYC students, but the literature on anxiety also speaks to the impact of classroom environment on learning. The language anxiety generated from classroom performance can inhibit further acquisition. For example, in her study of student error repairs in the language classroom, Sheen (2008) found that students with lower anxiety were more likely to make repairs in response to teacher recasts than were students with higher anxiety, suggesting that anxiety levels can interfere with working memory. In fact, MacIntyre and Gardner (1991) argue that anxiety can interfere with language learning at multiple stages: reception, processing, and production. If, as Braine (1996) found, students who chose to join ESL-designated FYC courses found them more hospitable and comfortable, it stands to reason that student anxiety levels were also lower, rendering students more receptive to learning.
Tellingly, Costino and Hyon (2007) also found that students' need for a sense of "belonging" was central to their selection of an FYC course when offered the option of either ESL-track or mainstream sections. The authors found that while all multilingual interviewees believed they had enrolled in the best composition class for them (mainstream or multilingual), their preference for the section they chose often related to their perceptions of their own English language abilities; however, students also noted a desire to study with others who were "like" them, which, in some instances, referred specifically to residency status (e.g., with other international students, as opposed to "the Americans"). Students' own label affiliations did not appear to affect their section preferences. This documented desire for a sense of community and, to some degree, homophily (even when the shared trait was a sense of otherness) seems to affirm the role a student's FYC class designation may play in their overall learning experience. As Peirce (1995) asserted, motivation is not a static, unchanging trait that learners either do or do not possess; individuals may be highly motivated, but unwilling to fully participate in language exchange in "particular social conditions" (p. 19). Similarly, Peirce also noted that learners' social identities are "sites of struggle," in a constant state of evolution and flux in response to the power dynamics at play. As such, the degree to which FYC students feel they belong in their writing class, and the degree to which they feel it represents their best interests and abilities, may affect the degree of investment they demonstrate in their learning.

Educator Assumptions
L2 placement practices may have consequences for student learning aside from those affective factors stemming from identity labels. Just as linguistically identifying labels carry specific connotations for students, faculty members also associate such labels with specific characteristics and learning needs; in turn, these associations inform pedagogical goals and practices. According to Lawrick (2013), at Purdue University, ESL-track FYC courses differed from the mainstream classes in that enrollment was capped at 15 instead of 20 to allow for more individual instruction and, notably, "more emphasis…on language focus" (p. 34). In her study of 161 ELL FYC students at Purdue, Lawrick (2013) found that students self-identified as speakers of 18 different languages. As the author noted, a working knowledge of students' L1 has obvious benefits for instructors, as they are able to more easily distinguish between the effect of L1 transfer and a genuine lack of knowledge as to the rhetorical objectives being asked of students (p. 35). However, unlike an English as a Foreign Language (EFL) course typical of those taught outside the U.S., an ESL FYC classroom is more often than not linguistically diverse; it is unrealistic to expect that FYC faculty will be familiar with all (or even some) of the languages their students speak. And yet, as Lawrick noted, monolithic labels such as "ESL" or "International Students" homogenize a wildly diverse range of students and linguistic backgrounds, and, critically, can also lead some educators to falsely assume limited or no prior exposure to English language composition or academic discourse more broadly. In her study, Lawrick found that 71% of students surveyed had received formal instruction in composition in their L1 prior to beginning their studies at Purdue, and, strikingly, 88% had received explicit instruction in English-language composition at an L1 learning institution. This disconnect between students' educational backgrounds and what faculty understand "ESL" students' background in English language writing (and composition in general) to be has clear implications for curricular design. According to Lawrick, such faulty assumptions lead faculty to approach FYC curricula from the stance that these students are being "exposed to teaching of rhetoric and composition for the first time in a U.S. FYC setting" (p. 49); thus, a universal ESL-track course that simply emphasizes language instruction is not only inefficient, it also potentially eliminates linguistic and rhetorical diversity in the classroom, making the "myth of homogeneity" Matsuda (2006) posited a self-fulfilling prophecy.
These assumptions may also unconsciously impact the way educators themselves interact with students and student writing in FYC courses. In their study on instructor feedback on student writing, Case, Williams, and Xu (2013) found that the self-identified linguistic identity of ELL students impacted the content, form, and amount of feedback the instructor gave. Though all of the instructors surveyed by the authors believed that they responded to their students "as individuals first" (p. 95), and that the "category" or linguistic identity of a student did not impact the feedback they gave, quantitative analysis of their categorized comments indicated otherwise. Comparing the feedback instructors gave to basic writers, Generation 1.5 writers, and international student writers in a mixed classroom in a community college context, the authors found that instructors gave the least feedback overall to U.S.-resident basic writers, and the most to Generation 1.5 writers. This was true of feedback that took the form of criticism, praise, and suggestions. International students received the second most feedback overall, but the most feedback in relation to their ideas (as opposed to grammar and academic form).
These findings are, on a superficial level, somewhat encouraging, as they suggest that simply being identified as "international" does not ensure a lopsided focus on enforcing a standardized English language grammar or dialect. However, the data, coupled with transcribed interviews with instructors, revealed ingrained beliefs about students' abilities, needs, and prior education; in other words, instructors see students not as individuals, but rather as the "homogenous" group that Lawrick (2013) cautions against. In an interview, one instructor said, "I do believe that the international students . . . have been so trained on form and organization of an essay that they actually usually have a pretty strong topic sentence. . . . Whereas with the Generation 1.5 students… [they] are often just throwing stuff on the page" (p. 97). Additionally, the authors found that instructors' feedback was linked to the perceived effort of the students in the class, with those perceived as working harder receiving feedback in a more mitigated form (i.e., receiving less criticism) and those perceived as less hard-working receiving more direct critique. This finding is potentially troubling (and merits further investigation) given the subjectivity involved in assessing effort, as well as the different ways in which effort may be performed or perceived interculturally. While findings pertaining to faculty perceptions of students based on identity labeling certainly speak to the need for better L2 pedagogical training, they also lay bare the stakes of placement into such classes for students who may have no say in their placement and, therefore, no say in the label (and educational assumptions) being assigned to them.

AN EMERGING ALTERNATIVE: THE POSSIBILITY OF A DSP MODEL
Given the potential costs to student learning, what is the path forward for FYC placement practices as they pertain to identity-labeling writing classes? As previously noted, since its first iteration 20 years ago, the CCCC position statement on second language writing has been revised to include an explicit endorsement of DSP as a preferred placement practice for multilingual writers (CCCC, 2020). Notably, the statement also still stresses that English language proficiency, as measured by direct assessment of multiple writing samples, should be a criterion for FYC placement of multilingual writers.
CCCC's advocacy for DSP is, to some extent, logical. Effective placement of multilingual students is high-stakes; when multilingual students perceive their placement as arbitrary, unfair, or remediating, their motivation, self-efficacy, and overall engagement suffer (Ortmeier-Hooper, 2008; Saenkhum, 2016). Yet, as noted above, placement is labor-intensive (and thus expensive) to do well (Silva, 1994; Ferris & Lombardi, 2020), leading to a frequent reliance on standardized test scores to place multilingual students in writing courses. On the other hand, simply allowing L2 writing students to select their own course runs its own risks. As CCCC noted in 2014, "self-placement without any direction may become merely a right to fail" (qtd. in Ferris & Lombardi, 2020). DSP therefore may offer the "best of both worlds" when it comes to L2 FYC placement, making room for both external measures of student performance and students' internal wishes and self-perceptions.

Defining Directed Self-Placement (DSP)
In their seminal introduction to DSP, Royer and Gilles (1998) presented their innovative approach to writing placement at Grand Valley State University for L1 English writing students as a way of decentering teachers and administrators in the placement process: "we began to envision students at the center, and for the first time we turned our attention to the people who knew our students best: the students themselves" (p. 61). Their pilot was born out of their realization that ACT scores alone (their institution's existing metric by which students were placed in the appropriate level of FYC) could not accurately predict which students would struggle, which would fail, and which would withdraw from the required FYC course. Royer and Gilles responded by first presenting a 10-minute speech detailing both their required FYC course (ENG 105) and their optional preparatory writing course (ENGL 098), explaining the differences between and curricula of the two; students then completed a self-assessment questionnaire evaluating their writing skills. Royer and Gilles found that after completing the self-assessment, 22% of the students who would have been placed in the required course according to their ACT scores opted to take the optional preparatory course first. Surveys conducted after the courses suggested that most of the 22% opted in because, after hearing the descriptions, the preparatory course just "felt right" (pp. 61-62). While their original study (1998) stopped short of proclaiming their DSP experiment an undeniable success by all metrics, Royer and Gilles did claim that DSP worked at their school, noting that they "continue to locate hundreds of students each year that feel they need additional help with their writing, and [they] do it very efficiently and on terms the students understand and appreciate" (p. 64).
Since Royer and Gilles' introduction, DSP for L1 writers has gained numerous proponents among writing program administrators (WPAs), both for its potential to lower the costs and security concerns of placement testing and, perhaps more notably, for its ability to put students themselves at the center of their own educational decision-making. Royer and Gilles, as well as more recent advocates of DSP, cite its potential to promote student agency and intrinsic motivation. Crusan (2010) argues, "DSP sends a powerful message to students because it affords them some agency and includes students' self-evaluations as an essential component in the placement decision. DSP tells students that they are important" (p. 778). Others suggest that it prevents the resistance some students inherently develop in response to being placed (Blakesley, 2002) and eliminates the problematic byproducts of "negative" labeling (White, 2008). With its increased popularity, DSP has also taken many different forms, suggesting adaptability to institutional needs. By definition, DSP involves institutional guidance (the "direction" referenced in directed self-placement) of students, who ultimately make their own decisions as to their FYC course assignment. In practice, this guidance can come in the form of faculty presentations, PowerPoints, brochures, one-on-one advising meetings, student surveys, or even self-administered testing. In some instances, student surveys or questionnaires are shared with faculty to help them gauge how best to support their students, regardless of the students' placement choices.

DSP in the L2 Context
Given the adaptability of and general enthusiasm for DSP practices among WPAs, the CCCC endorsement of DSP for L2 writers is perhaps unsurprising. And, to be sure, its potential to avoid the negative impact on learning that traditional placement practices might incur is intriguing. Yet surprisingly little research has been done on DSP's accuracy and effectiveness for L2 writers. In one of the few scholarly journal articles to feature empirical research on the accuracy and efficacy of a DSP diagnostic assessment model for L2 first-year writers, Ferris et al. (2017) studied the relationship between L2 students' self-placement scores and their performance on traditional forms of assessment (placement exams and standardized tests, such as the SAT and TOEFL). The study sample consisted of 1,067 L2 university students at a large U.S. university in Northern California with four distinct developmental writing course levels. The university, which had a rapidly growing multilingual and L2 student body, had recently transitioned from a statewide standardized placement test to an examination that was developed and administered "in-house."
For the purposes of their study, the researchers asked L2 writers to complete a self-evaluation survey in addition to the new localized placement test (the English Language Proficiency Exam, or ELPE). The authors found that the average score on the self-assessment was only slightly higher than the average score on the ELPE, and most students (79%) were able to place themselves within one level of the ELPE's outcome, while 20% of students were 2-3 levels off. In general, students were more likely (39%) to place themselves higher than the test, as opposed to lower (23%), and 34% of students placed themselves in the same level as the ELPE exam. A compelling finding of this study pertains to the relationship between the accuracy of self-evaluation scores and level: students with the lowest ELPE scores were the most likely to have a "mismatch" between their self-evaluation scores and the ELPE scores, and 33% of students placed in Level 1 (the lowest level) by the ELPE believed they should be placed 2-3 levels higher. By contrast, students who were placed in Level 4 (the highest level) were the most likely to self-evaluate themselves lower than their placement. Interestingly, students who scored in the middle (Levels 2 and 3) were much more likely (85-87%) to place themselves in either matching levels or within one level of the ELPE score. Notably, the authors found that, overall, standardized test scores did not significantly correlate with either ELPE or self-assessment scores, with only the SAT reading score (but not writing) and TOEFL writing and speaking scores (but not overall score) correlating with ELPE scores at a statistically significant rate, thereby reinforcing the increasingly common assertion that many commonly used standardized test scores are not an effective means of L2 writing class placement.
While Ferris et al.'s study is, indeed, promising, the authors themselves came to the conclusion that their study did not prove that "self-assessment alone would work for effective placement of students in [their] four-level L2 writing program," but they also noted it "did not demonstrate that incorporation of such student input would be a complete disaster, either" (p. 8). Further, the limited context in which the modified DSP model was implemented and studied naturally limits the generalizability of their findings. Indeed, Saenkhum's (2016) qualitative study of student experiences with a comparable placement model at Arizona State University (ASU) found that among her small sample (n = 7), two students were unsatisfied with their placement decisions, and others expressed concerns about misinformation or inadequate guidance from their placement advisors. In one instance, a student was encouraged to enroll in a multilingual FYC section because it would be less work (p. 53), an inaccurate representation of the course that harkens back to faculty and administrators' misconceptions about FYC courses and the students who take them. Saenkhum's study, while much smaller in scale, reveals how critical accurate and informed guidance is for a DSP model to work. The author, however, concluded that DSP is still a preferred model of placement for L2 writers, calling for comprehensive teacher and advisor training in the needs of L2 writers specifically.
Following the 2017 study by Ferris et al., Ferris and Lombardi (2020) conducted a study of a "collaborative" placement model. Acknowledging Ferris et al.'s previous finding that standardized test scores were not accurate placement tools for FYC courses, Ferris and Lombardi implemented a "limited model" of DSP at the same four-year institution studied in 2017. In this iteration of the study, L2 students completed a "traditional" in-house writing placement exam, and those students with borderline scores were given the option to move one level up or one level down. Placement decisions were made remotely and weeks in advance of the start of the semester, so limited opportunities for advising were available. Of the students who received borderline scores (n = 65) and were given the option to move, 39 chose to move up a level, six chose to move down, and 20 opted to stay at the level into which they had placed. At the end of the term, Ferris and Lombardi found that while the final grades of the limited DSP pilot group were lower than those of a control group (a difference that was statistically significant at the .05 level), the difference on a practical level was relatively small, with the pilot group averaging a B- and the control group averaging a B. The authors noted no difference in failure rates and concluded that there was "no real detrimental effect on the pilot group students of being allowed to have input into their course placement."
The authors also conducted a survey on student attitudes about their placement experience after the conclusion of the course and found that students from the pilot group were more likely to indicate that the level of the class they took "was perfect for [them]" (77%) than students in the control group (58%). The authors pointed out that previous students in the same program who were placed via "traditional" placement methods "had felt frustrated, during and after the fact, about their placement outcomes." Ferris and Lombardi's study, therefore, suggests that a collaborative model in which students can make small adjustments to their placement level might be a more promising option than DSP as originally conceived by Royer and Gilles (1998). By allowing students to adjust their placement slightly, but not to place themselves entirely, such a model accounts for some of the variability caused by the quality of advisement, the availability of information, and students' self-perceptions.

CONCLUSION
The placement of multilingual writers in FYC classes has undeniable implications not only for student attrition and learning outcomes, but also for L2 students' sense of identity, agency, and investment in their writing classes and their university experience at large. The nascent research into the potential of DSP to effectively serve the burgeoning number of multilingual writers at U.S. universities is promising, offering preliminary support for the CCCC endorsement of DSP as the placement practice of choice for L2 writers. However, in order to fully measure the long-term efficacy of L2 DSP, further research is required, especially across a broader range of institutions and writing program types. At present, the limited data make it impossible to gauge the extent to which findings on DSP and collaborative placement models are generalizable. The currently available studies of U.S. institutions are based at large, west coast institutions with sizable multicultural and multilingual populations, as well as internationally recognized applied linguists (Ferris and Matsuda) housed within the writing program faculty. Findings (particularly regarding student satisfaction) may therefore be affected by the institutions' geographic locations and demographics, with local and institutional socio-cultural norms influencing perceptions surrounding the "ESL" identity. Furthermore, the available studies were conducted at institutions with their own in-house writing placement tests, which were used in conjunction with student self-assessment and advisement to determine placement.
Similarly, research into the impact of FYC placement on L2 learning outcomes would benefit from an expansion in scope. One notable limitation of much of the surveyed literature on student perceptions of and satisfaction with identity-labelling FYC courses was the available data. In some instances (Cavazos, 2019; Costino & Hyon, 2007; Ortmeier-Hooper, 2008; Marshall, 2009), small data sets or a very limited number of participants inherently call the replicability and generalizability of the findings into question. Additionally, some studies (Costino & Hyon, 2007; Ortmeier-Hooper, 2008) relied on data and interviews collected from the authors' own classes, thus introducing a significant potential for bias. Finally, nearly all of the literature surveyed relied heavily on interviews conducted during or immediately after the semester analyzed, as well as, in some cases, authors' impressions of classroom exercises and assignments. The relative lack of truly quantifiable data (Case, Williams, & Xu's 2013 study being the exception) and of transcripts of classroom interactions from ESL-track FYC classrooms presents a challenge to making firm claims about the degree to which the identity labels associated with some FYC courses affect students and their investment in learning.
Perhaps most importantly, no research currently exists that truly factors in both sides of the placement coin. Simply stated, research that focuses solely on student outcomes (as measured by FYC grades, test scores, etc.) or solely on student perceptions of their placement risks painting an inaccurate picture of overall placement (and thereby, student) success. Future research should strive to attain a holistic picture of FYC placement efficacy by first critically examining how we (as students, teachers, administrators, and institutions) define "successful" FYC outcomes. Grades and student satisfaction are indeed important variables, but so are longer-term outcomes such as student retention, graduation rates, and overall academic performance at current and future institutions. As Crusan (2002) notes, studies that use final class grades as a sign of the validity of a placement practice "would likely fail to account for a significant source of variation, that of instructor grading variability" (p. 23).
Honoring the complexity of measuring successful outcomes will likely require more mixed-methods studies like those conducted by Ferris et al. (2017) and Ferris and Lombardi (2020). Moreover, longitudinal studies that track student outcomes alongside their motivation, self-efficacy, and engagement levels would offer valuable insights into the effect of FYC placement on L2 students' overall learning and integration into their university communities. By acknowledging that FYC placement requires consideration of both accurate assessment of student writing and the potential impact on the affective factors that contribute to L2 acquisition, researchers have the opportunity to develop best practices to place, teach, and support an increasingly large portion of the U.S. student population.