As we’ve talked about on several occasions, this myth deals with prescriptive vs. descriptive grammar. Prescriptive grammar is an imposition of rules (and I use that term loosely), some of which are arbitrary. Descriptive grammar, on the other hand, describes how we use language in a more natural sense; it is more context-specific and takes dialect and register into account. People use English differently from each other and differently from the received standard, and that’s fine. It contributes to the richness and variety of English, which, historically speaking, is a pretty varied language as it is.
When we get into the questions posed at the beginning of the essay, we run into some issues with definitions. In what context do we mean “grammar,” and more specifically, how are we defining “bad” grammar? Further, we usually have some idea of what we’re considering, and of the particular standards we use to justify labeling an utterance as “bad grammar,” but what aren’t we considering when we use these terms?
In the first three examples beginning on p. 94, we have three descriptive sentences and their prescriptive counterparts. And while the descriptive sentences do not meet the standards of prescriptive grammar, consider the communicative act itself: is there any ambiguity or difficulty in understanding what these descriptive sentences are communicating? I’d be hard-pressed to believe that there is.
Milroy, the essay’s author, mentions codification on p. 95, the standardizing of a language. And there are certainly some benefits to having a standard, or at least an idea of one. It helps us determine what constitutes good academic writing. It gives us an understanding of how to communicate in highly formal settings. And by knowing the standard, we’re better able to criticize it constructively. But imposing a standard renders anything below it as something lesser, and usually as something not good enough in a specific context. And telling people that their language isn’t good enough certainly has its drawbacks.
Prescriptivism is largely based on “cultural, [social], and political pressures” (Milroy 96). That is why some standards are based on Latin, even though Latin grammar and English grammar function differently in many ways. As we’ve talked about before, Latin was a revered language, and to give English a similar reverence, those writing the rules based many English standards on Latin.
Milroy mentions that prescriptivism is something of a “linguistic etiquette,” and of course, there are certain contexts in which heightened etiquette is a good idea (96): writing one’s thesis, speaking to a judge while on trial, speaking to a job interviewer, and so on. If, however, every speaker of English used that register in every communicative occurrence, not only would we sound insufferably stuffy, but the language would be stripped of its variety and color, and the diversity with which we use it would severely diminish, and possibly disappear. And again, what do we stand to lose if everyone used the highest form of language etiquette all the time?
Towards the bottom of p. 98, Milroy makes an excellent point: “A grammatical sentence…follows the rules of the language as it is used by its native speakers” (emphasis mine). Even if we can’t name the linguistic phenomena or identify certain parts of a sentence like the direct object, most of us still produce utterances that are understandable and, in that sense, grammatical. (There are exceptions: children, those who have suffered strokes or have aphasia, etc.) Those instances notwithstanding, ungrammatical sentences like “That book is my!” or “I didn’t went to school yesterday” are not going to occur naturally in the language. Our unconscious understanding of grammar—our mental grammar—provides this knowledge, whether we’re aware of it or not.
Towards the bottom of p. 99, Milroy makes another point, which is quite interesting: “[L]inguistic prescriptivism [is] the last open door to discrimination.” And that happens quite often; I’m sure we’ve all made judgments about a person based on his/her grammar. When we’re in a position to pass judgment on someone’s grammar in certain contexts, such as when we’re English teachers reading students’ essays, we’re probably going to bring an elevated sense of grammatical use to the table. In most cases, however, passing judgment on someone’s grammar, and on how it reflects on that person, is rather uncalled for, and may say more about us than about the other person. Which is worse: “bad” grammar or judgmental pedantry?
Also, if we’re talking about grammar, we have to consider the medium and be aware that there are some big differences between writing and speaking. Speech is often ephemeral; there’s no chance to revise a spoken utterance. Sure, you can go back and correct yourself, or acknowledge that you misspoke, but that doesn’t change the fact that the misstatement is already out and the listener has heard it. Speech is also more prone to “slovenliness” (and again, I’m using that term loosely), if we pay attention: “um,” “uh,” “er,” “like,” “y’know,” and other verbal tics, nervous throat-clearing, unfinished sentences, etc. are rampant in most people’s speech. And as with commenting on someone’s grammar, there are certain contexts in which telling a person that she says “um” too much may be received as constructive criticism and not unnecessary judgment.
To wrap this up, definitions of “bad grammar” and “good grammar” are quite difficult to maintain once we take some necessary facets of language use into account. How people define these terms varies across geographic and dialectal regions, and to remain steadfast to a single definition of “bad grammar” is to impose a standard that is not as black-and-white as we might think. This catchall term is exceedingly difficult to sustain in the grand perspective of how English is actually used.