As the government begins its crackdown on essay mill websites, it’s easy to see just how much pressure students are under to get top grades for their coursework these days. But writing a high-scoring paper doesn’t need to be complicated. We spoke to experts to get some simple techniques that will raise your writing game.
Tim Squirrell is a PhD student at the University of Edinburgh, and is teaching for the first time this year. When he was asked to deliver sessions on the art of essay-writing, he decided to publish a comprehensive (and brilliant) blog on the topic, offering wisdom gleaned from turning out two or three essays a week for his own undergraduate degree.
“There is a knack to it,” he says. “It took me until my second or third year at Cambridge to work it out. No one tells you how to put together an argument and push yourself from a 60 to a 70, but once you get to grips with how you’re meant to construct them, it’s simple.”
The goal of writing any essay is to show that you can think critically about the material at hand (whatever it may be). This means going beyond regurgitating what you’ve read; if you’re just repeating other people’s arguments, you’re never going to trouble the upper end of the marking scale.
“You need to be using your higher cognitive abilities,” says Bryan Greetham, author of the bestselling How to Write Better Essays. “You’re not just showing understanding and recall, but analysing and synthesising ideas from different sources, then critically evaluating them. That’s where the marks lie.”
But what does critical evaluation actually look like? According to Squirrell, it’s simple: you need to “poke holes” in the texts you’re exploring and work out the ways in which “the authors aren’t perfect”.
“That can be an intimidating idea,” he says. “You’re reading something that someone has probably spent their career studying, so how can you, as an undergraduate, critique it?
“The answer is that you’re not going to discover some gaping flaw in Foucault’s History of Sexuality Volume 3, but you are going to be able to say: ‘There are issues with these certain accounts, here is how you might resolve those’. That’s the difference between a 60-something essay and a 70-something essay.”
Critique your own arguments
Once you’ve cast a critical eye over the texts, you should turn it back on your own arguments. This may feel like going against the grain of what you’ve learned about writing academic essays, but it’s the key to drawing out developed points.
“We’re taught at an early age to present both sides of the argument,” Squirrell continues. “Then you get to university and you’re told to present one side of the argument and sustain it throughout the piece. But that’s not quite it: you need to figure out what the strongest objections to your own argument would be. Write them and try to respond to them, so you become aware of flaws in your reasoning. Every argument has its limits and if you can try and explore those, the markers will often reward that.”
Fine, use Wikipedia then
The use of Wikipedia for research is a controversial topic among academics, with many advising their students to stay away from the site altogether.
“I genuinely disagree,” says Squirrell. “Those on the other side say that you can’t know who has written it, what they had in mind, what their biases are. But if you’re just trying to get a handle on a subject, or you want to find a scattering of secondary sources, it can be quite useful. I would only recommend it as either a primer or a last resort, but it does have its place.”
Focus your reading
Reading lists can be a hindrance as well as a help. They should be your first port of call for guidance, but they aren’t to-do lists. A book may be listed, but that doesn’t mean you need to absorb the whole thing.
Squirrell advises reading the introduction and conclusion and a relevant chapter but no more. “Otherwise you won’t actually get anything out of it because you’re trying to plough your way through a 300-page monograph,” he says.
You also need to store the information you’re gathering in a helpful, systematic way. Bryan Greetham recommends a digital update of his old-school “project box” approach.
“I have a box to catch all of those small things – a figure, a quotation, something interesting someone says – I’ll write them down and put them in the box so I don’t lose them. Then when I come to write, I have all of my material.”
There are plenty of online offerings to help with this, such as the project management app Scrivener and the referencing tool Zotero. For the procrastinators, there are productivity programmes like Self Control, which allow users to block certain websites from their computers for a set period.
Look beyond the reading list
“This is comparatively easy to do,” says Squirrell. “Look at the citations used in the text, put them in Google Scholar, read the abstracts and decide whether they’re worth reading. Then you can look on Google Scholar at other papers that have cited the work you’re writing about – some of those will be useful. But quality matters more than quantity.”
And finally, the introduction
The old trick of dealing with your introduction last is common knowledge, but it seems few have really mastered the art of writing an effective opener.
“Introductions are the easiest things in the world to get right and nobody does it properly,” Squirrell says. “It should be ‘Here is the argument I am going to make, I am going to substantiate this with three or four strands of argumentation, drawing upon these theorists, who say these things, and I will conclude with some thoughts on this area and how it might clarify our understanding of this phenomenon.’ You should be able to encapsulate it in 100 words or so. That’s literally it.”
Complexity theory: what, why and how?
The history of complexity goes back to antiquity, to Greece, China and beyond, where complexity was mostly thought of as whatever lies beyond what human minds can handle. The idea of complexity as a coherent scientific concept is quite new: it dates back to the early 1960s and the effort to define computational complexity during the development of modern computers. The issue of computational complexity arose naturally from the need to measure resources, time and memory, as a function of the size and nature of the input problem. The concept of complexity is also attached to the impossibility theorems of Gödel and other mathematicians, developed from the 1930s onwards, dealing with the sizes of axiom systems for logical theories and with the number of ways to satisfy such axioms. The idea that complexity might be related to information content was developed further with the notion of algorithmic information content, the length of the shortest program that represents a given system. A new wave of interest was spurred by the involvement of physicists, who discovered in the early 1980s that complexity might offer a general guideline for understanding the physical, biological and social worlds [Wolfram, 2002].
The study of out-of-equilibrium dynamics (e.g. dynamical phase transitions) and of heterogeneous systems (e.g. glasses, rocks) has progressively popularized, first in physics and then in its sister branches (geology, biology, etc.), the concept of complex systems and the importance of systemic approaches: systems with a large number of mutually interacting parts, often open to their environment, self-organize their internal structure and their dynamics, with novel and sometimes surprising macroscopic "emergent" properties.
The complex system approach, which involves seeing inter-connections and relationships, i.e., the whole picture as well as the component parts, is nowadays pervasive in modern control of engineering devices and business management. It also plays an increasing role in most of the scientific disciplines, including biology (biological networks, ecology, evolution, origin of life, immunology, neurobiology, molecular biology, etc.), geology (plate tectonics, earthquakes and volcanoes, erosion and landscapes, climate and weather, environment, etc.), and economics and the social sciences (including cognition, distributed learning, interacting agents, etc.). There is a growing recognition that progress in most of these disciplines, and in many of the pressing issues for our future welfare as well as the management of our everyday life, will require such a systemic, multidisciplinary complex-systems approach.
A central property of a complex system is the possible occurrence of coherent large-scale collective behaviors with a very rich structure, resulting from the repeated non-linear interactions among its constituents: the whole turns out to be much more than the sum of its parts. It is widely believed that most of these systems are not amenable to mathematical, analytic description and can only be explored by means of "numerical experiments." In the context of the mathematics of algorithmic complexity [Chaitin, 1987], most complex systems are said to be computationally irreducible, i.e., the only way to determine their evolution is to actually let them evolve in time. Accordingly, the future time evolution of complex systems would be inherently unpredictable. This unpredictability does not, however, prevent the application of the scientific method to the prediction of novel phenomena. Rather, it frustrates the curiosity, sharpened by the anguish and hope that humans have always projected onto their future. Is modern science really putting the grail of predicting (some of) the future evolution of complex systems out of reach?
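Computational irreducibility is easy to exhibit in a toy system. The sketch below is illustrative only: Rule 30 is a standard example from Wolfram's work, and the width, seed and step count are arbitrary choices. The state at step n can, as far as is known, only be obtained by actually running all n steps:

```python
# Rule 30 elementary cellular automaton: a textbook computationally
# irreducible system. Wolfram's rule number encodes the update table
# for the 8 possible (left, center, right) neighborhoods in 8 bits.

RULE = 30

def step(cells):
    """Apply one synchronous update with periodic boundary conditions."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(cells, steps):
    for _ in range(steps):  # no shortcut: each step depends on the previous
        cells = step(cells)
    return cells

width = 31
state = [0] * width
state[width // 2] = 1          # single seed cell
final = evolve(state, 15)
print("".join("#" if c else "." for c in final))
```

The nested-triangle pattern that emerges from a single seed has, to date, no known closed-form shortcut, which is exactly the point made in the text.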
This view has recently been defended persuasively in concrete prediction applications, such as the socially important issue of earthquake prediction (see the contributions in [Nature debate, 1999]). In addition to the persistent failure to reach a reliable earthquake-prediction scheme, this view is rooted theoretically in the analogy between earthquakes and self-organized criticality [Bak, 1996]. In this "fractal" framework, there is no characteristic scale, and the power-law distribution of sizes reflects the fact that large earthquakes are nothing but small earthquakes that did not stop. They are thus unpredictable, because their nucleation is no different from that of the multitude of small earthquakes, which obviously cannot all be predicted.
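The scale-free picture invoked above can be illustrated numerically. In the sketch below, the power-law exponent, minimum size and sample size are arbitrary assumptions, not values fitted to any earthquake catalogue; the point is only that in a Pareto-distributed population the few largest events carry a disproportionate share of the total:

```python
# Sample event sizes from a pure Pareto power law with exponent alpha = 1
# (Gutenberg-Richter-like, illustrative values) and measure how much of the
# total "size" is carried by the very largest events.
import random

random.seed(42)
alpha = 1.0
s_min = 1.0
# Inverse-transform sampling: S = s_min * U**(-1/alpha) is Pareto-distributed
# when U is uniform on (0, 1]; (1 - random()) guarantees U is never zero.
sizes = [s_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(100_000)]

sizes.sort(reverse=True)
top_share = sum(sizes[:100]) / sum(sizes)  # share of the largest 0.1% of events
print(f"the largest 0.1% of events carry {top_share:.0%} of the total size")
```

With no characteristic scale, the sum is dominated by its extremes, which is why the text later argues that rare catastrophic events control the long-term behavior.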
Does this really hold for all features of complex systems? Take our personal lives. We are not really interested in knowing in advance at what time we will go to a given store or drive on a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting the phases of evolution of complex systems that really count.
It turns out that most complex systems around us do exhibit rare and sudden transitions that occur over time intervals that are short compared with the characteristic time scales of their subsequent evolution. Such extreme events express more than anything else the underlying "forces" usually hidden by an almost perfect balance, and thus provide the potential for a better scientific understanding of complex systems. These crises have fundamental societal impacts and range from large natural catastrophes and catastrophic episodes of environmental degradation to the failure of engineering structures, crashes in the stock market, social unrest leading to large-scale strikes and upheaval, economic drawdowns on national and global scales, regional power blackouts, traffic gridlock, diseases and epidemics, etc. It is essential to realize that the long-term behavior of these complex systems is often controlled in large part by these rare catastrophic events: the universe was probably born during an extreme explosion (the "big bang"); the nucleosynthesis of most of the important atomic elements constituting our matter results from the colossal explosions of supernovae; the largest earthquake in California, repeating about once every two centuries, accounts for a significant fraction of the total tectonic deformation; landscapes are shaped more by the "millennium" flood that moves large boulders than by the action of all other eroding agents; the largest volcanic eruptions lead to major topographic changes as well as severe climatic disruptions; evolution is characterized by phases of quasi-stasis interrupted by episodic bursts of activity and destruction; financial crashes can destroy trillions of dollars in an instant; political crises and revolutions shape the long-term geopolitical landscape; and even our personal lives are shaped in the long run by a few key decisions and happenings.
The outstanding scientific question is thus how such large-scale patterns of a catastrophic nature might evolve from a series of interactions on the smallest and increasingly larger scales. In complex systems, it has been found that the organization of spatial and temporal correlations does not stem, in general, from a nucleation phase diffusing across the system. Rather, it results from a progressive and more global cooperative process occurring over the whole system through repeated interactions. An instance would be the many occurrences of simultaneous scientific and technical discoveries, signaling the global nature of the maturing process.
Standard models and simulations of scenarios of extreme events are subject to numerous sources of error, each of which may have a negative impact on the validity of the predictions [Karplus, 1992]. Some of the uncertainties are under control in the modeling process; they usually involve trade-offs between a more faithful description and manageable calculations. Other sources of error are beyond control, as they are inherent in the modeling methodology of the specific disciplines. The two known strategies for modeling are both limited in this respect: analytical theoretical predictions are out of reach for most complex problems, while brute-force numerical resolution of the equations (when they are known) or of scenarios is reliable only in the "center of the distribution," i.e., in the regime far from the extremes where good statistics can be accumulated. Crises are extreme events that occur rarely, albeit with extraordinary impact, and are thus completely under-sampled and poorly constrained. Even the introduction of teraflop (or, in the future, petaflop) supercomputers does not qualitatively change this fundamental limitation.
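The under-sampling of extremes can be made concrete with synthetic data. In the sketch below (the distribution, sample sizes and quantile are all arbitrary assumptions), repeated "experiments" on a heavy-tailed population yield stable estimates of the median but wildly fluctuating estimates of a far-tail quantile:

```python
# Compare the stability of a central statistic (the median) with that of a
# far-tail quantile estimated from the same heavy-tailed samples.
import random
import statistics

def quantile(xs, q):
    """Crude empirical quantile: the value at rank q in the sorted sample."""
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(q * len(xs)))]

random.seed(0)
medians, tails = [], []
for _ in range(20):                                   # 20 independent runs
    sample = [random.paretovariate(1.5) for _ in range(10_000)]
    medians.append(quantile(sample, 0.5))
    tails.append(quantile(sample, 0.9999))            # the "crisis" regime

def spread(xs):
    """Range of the estimates relative to their typical value."""
    return (max(xs) - min(xs)) / statistics.median(xs)

print(f"relative spread of median estimates: {spread(medians):.2f}")
print(f"relative spread of tail estimates:   {spread(tails):.2f}")
```

The median barely moves between runs, while the 99.99th-percentile estimate rests on a handful of observations and varies by factors, which is exactly the "poorly constrained" regime described above.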
Recent developments suggest that non-traditional approaches, based on the concepts and methods of statistical and nonlinear physics coupled with ideas and tools from computational intelligence, could provide novel methods for directing the numerical resolution of more realistic models and for identifying relevant signatures of impending catastrophes. Enriching the concept of self-organized criticality, the predictability of crises would then rely on the fact that they are fundamentally outliers; e.g., large earthquakes are not scaled-up versions of small earthquakes but the result of specific collective amplifying mechanisms. Similarly, financial crashes do not belong to the same distribution as smaller market moves [Johansen and Sornette, 2002]. To address the challenge posed by the identification and modeling of such outliers, the available theoretical tools comprise in particular bifurcation and catastrophe theories, dynamical critical phenomena and the renormalization group, nonlinear dynamical systems, and the theory of partially (spontaneously or not) broken symmetries. Some encouraging results have been gathered on concrete problems, such as the prediction of the failure of complex engineering structures and the detection of precursors to stock market crashes and to human parturition, with exciting potential for earthquakes [Sornette, 2002].
We now proceed to present how these ideas have been explored and exploited in the financial and social spheres.
Questions and lessons from the stock market crash of October 1987
From the opening on October 14, 1987 through the market close on October 19, major indexes of market valuation in the United States declined by 30 percent or more. Furthermore, all major world markets declined substantially that month, which is itself an exceptional fact, contrasting with the usually modest correlations of returns across countries and with the fact that stock markets around the world are remarkably diverse in their organization (Barro et al., 1989; White, 1996).
The crash of October 1987, with its Black Monday on October 19, remains one of the most striking drops ever seen in stock markets, both for its overwhelming amplitude and for its sweep across most markets worldwide. It was preceded by a remarkably strong "bull" regime, epitomized by the following quote from the Wall Street Journal of August 26, 1987, the day after the 1987 market peak: "In a market like this, every story is a positive one. Any news is good news. It's pretty much taken for granted now that the market is going to go up." This and other indicators, such as the implied volatility, show that investors were mostly unaware of the forthcoming risk (Grant, 1990).
A lot of work has been carried out to unravel the origin(s) of the crash, notably in the properties of trading and the structure of markets; however, no clear cause has been singled out. It is noteworthy that the strong market decline during October 1987 followed what for many countries had been an unprecedented market increase during the first nine months of the year, and even before. In the US market, for instance, stock prices advanced 31.4% over those nine months. Some commentators have suggested that the real cause of October's decline was that over-inflated prices during the earlier period had generated a speculative bubble. The main explanations that have been invoked include computer trading, derivative securities, lack of liquidity, trade and budget deficits, overvaluation, etc. Other cited potential causes involve the auction system itself, the presence or absence of limits on price movements, regulated margin requirements, off-market and off-hours trading (continuous auction and automated quotations), the presence or absence of floor brokers who conduct trades but are not permitted to invest on their own account, the extent of trading in the cash market versus the forward market, the identity of traders (i.e. institutions such as banks or specialized trading firms), the significance of transaction taxes, etc.
More rigorous and systematic analyses, using univariate associations and multiple regressions of these various factors, conclude that it is not at all clear what caused the crash (Barro et al., 1989). The most precise statement, albeit somewhat self-referential, is that the most statistically significant explanatory variable in the October crash is the normal response of each country's stock market to a worldwide market motion. A world market index was thus constructed by equally weighting the local-currency indexes of 23 major industrial countries and normalizing it to 100 on September 30, 1987 (Barro et al., 1989). It fell to 73.6 by October 30. The important result is that the index was found to be statistically related to monthly returns in every country during the period from the beginning of 1981 until the month before the crash, albeit with widely varying magnitudes of response across countries. This correlation was found to swamp the influence of the institutional market characteristics. This signals the possible existence of a subtle but nonetheless influential worldwide cooperativity (contagion) at times preceding crashes.
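The index construction described above can be sketched in a few lines. The country levels below are hypothetical placeholders, not the actual Barro et al. data; the point is the method: normalize each local-currency index to 100 at the base date and average with equal weights:

```python
# Equally weighted world market index, normalized to 100 at the base date.

def world_index(country_series):
    """country_series: dict mapping country -> list of local-currency index
    levels over time. Returns an equally weighted world index, normalized
    to 100 at the first date."""
    countries = list(country_series)
    n_periods = len(next(iter(country_series.values())))
    return [
        sum(100.0 * country_series[c][t] / country_series[c][0] for c in countries)
        / len(countries)
        for t in range(n_periods)
    ]

# Hypothetical levels for three markets on Sep 30 and Oct 30, 1987:
series = {
    "US":    [100.0, 78.0],
    "UK":    [200.0, 148.0],
    "Japan": [50.0, 43.0],
}
print(world_index(series))  # starts at 100.0 by construction
```

Normalizing each series before averaging is what makes the weighting "equal": the index then tracks the mean percentage move, independently of the units of each local index.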
According to this view, the crash of a given country's market is "explained" as driven by, or responding to, a worldwide market crash. But what is the origin of the world market crash itself? Is it the result of the superposition of all the individual national crashes? This line of argument is reminiscent of the infamous chicken-and-egg problem, and we believe it does not adequately answer the question of what fundamentally caused the crash.
Questions and lessons from the stock market crash of March-April 2000 on the Nasdaq index
At its low of 3,227 on April 17, 2000, the Nasdaq Composite Index had lost over 37% of its all-time high of 5,133, reached on March 10, 2000. The Nasdaq Composite consists mainly of stocks related to the so-called "New Economy," i.e., the Internet, software, computer hardware, telecommunications and so on. A main characteristic of these companies is that their price-earnings ratios (P/Es), and even more so their price-dividend ratios, often came in three digits. By contrast, so-called "Old Economy" companies, such as Ford, General Motors and Daimler-Chrysler, had P/Es of the order of 10. The difference between Old Economy and New Economy stocks was thus the expectation of future earnings: investors expected an enormous increase in, for example, the sale of Internet- and computer-related products rather than in car sales, and were hence more willing to invest in Cisco than in Ford, notwithstanding the fact that the earnings per share of the former were much smaller than those of the latter.
In the standard fundamental valuation formula, in which the expected return of a company is the sum of the dividend return and the growth rate, New Economy companies are supposed to compensate for their lack of present earnings by fantastic potential growth. In essence, this means that the bull market observed in the Nasdaq until the end of the first quarter of 2000 was fueled by expectations of increasing future earnings rather than by economic fundamentals: the price-dividend ratio for a company such as Lucent Technologies (LU), with a capitalization of over $300 billion prior to its crash on 5 January 2000, was over 900, which means that, unless the price of the stock increased, you got a higher return on your checking account(!). By contrast, an Old Economy company such as DaimlerChrysler gave a return more than thirty times higher. Nevertheless, the shares of Lucent Technologies rose by more than 40% during 1999, whereas the shares of DaimlerChrysler declined by more than 40% in the same period.
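A back-of-the-envelope check of these figures: in the valuation formula above, the dividend return is simply the inverse of the price-dividend ratio. The DaimlerChrysler ratio of about 30 used below is inferred from the phrase "thirty times higher", not stated in the text:

```python
# Dividend yield implied by a price-to-dividend ratio: expected return
# = dividend yield + growth rate, so a P/D over 900 leaves a yield of
# barely 0.1% for the dividend component.

def dividend_yield(price_to_dividend):
    """Dividend return as a fraction of price."""
    return 1.0 / price_to_dividend

lucent = dividend_yield(900)   # P/D > 900 quoted for Lucent Technologies
daimler = dividend_yield(30)   # hypothetical P/D consistent with "30x higher"

print(f"Lucent dividend yield:  {lucent:.2%}")   # about 0.11%
print(f"Daimler dividend yield: {daimler:.2%}")  # about 3.33%
```

At a 0.11% yield, almost all of Lucent's expected return had to come from the growth term, which is precisely the text's point about expectations rather than fundamentals.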
The generic scenario
This makes clear that it is the expectation of future earnings rather than present economic reality that motivates the average investor, with the potential for the creation of a speculative bubble. History provides many examples of bubbles driven by unrealistic expectations of future earnings followed by crashes. The same basic ingredients are found repeatedly. Markets go through a series of stages, beginning with a market or sector that is successful, with strong fundamentals. Credit expands, and money flows more easily. (Near the peak of Japan's bubble in 1990, Japan's banks were lending money for real estate purchases at more than the value of the property, expecting the value to rise quickly.) As more money is available, prices rise. More investors are drawn in, and expectations for quick profits rise. The bubble expands, and then bursts. In other words, fueled by initially well-founded economic fundamentals, investors develop a self-fulfilling enthusiasm by an imitative process or crowd behavior that leads to the building of castles in the air, to paraphrase Malkiel (1990).
Furthermore, the causes of the crashes on the US markets in 1929, 1987, 1998 and 2000 belong to the same category, the difference lying mainly in which sector the bubble was created: in 1929 it was utilities; in 1987 the bubble was supported by a general deregulation of the market, with many new private investors entering it with very high expectations of the profits they would make; in 1998 it was enormous expectations of investment opportunities in Russia that collapsed; before 2000 it was the extremely high expectations for the Internet, telecommunications and so on that fueled the bubble.
Positive feedback, collective behaviors and herding
In a culmination of more than ten years of research on the science of complex systems, we have thus challenged the standard economic view that stock markets are both efficient and unpredictable. The main concepts needed to understand stock markets are imitation, herding, self-organized cooperativity and positive feedback, leading to the development of endogenous instabilities. According to this theory, local effects such as interest-rate rises, new tax laws, new regulations and so on, invoked as the cause of the burst of a given bubble, are only triggering factors, not the fundamental cause of the bubble's collapse. We propose that the true origin of a bubble, and of its collapse, lies in the unsustainable pace of stock market price growth. As a speculative bubble develops, it becomes more and more unstable and highly susceptible to any disturbance.
Large stock market crashes are thus the social analogs of so-called critical points studied in the statistical physics community in relation to magnetism, melting, and other phase transformations of solids, liquids and gases (Sornette, 2000). This theory is based on the existence of an (unwitting) cooperative behavior of traders imitating each other, which leads to a progressively increasing build-up of market cooperativity, or effective interactions between investors, often translated into an accelerating ascent of the market price over months and years before the crash. According to this theory, the specific manner in which prices collapse is not the most important problem: a crash occurs because the market has entered an unstable phase, and any small disturbance or process may trigger the instability.
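As a closing illustration, the "unsustainable pace of price growth" central to this theory can be sketched as a power-law path with a finite-time singularity. All parameters below (A, B, m, t_c) are made-up values for illustration, not fitted to any market:

```python
# Stylized bubble price growing faster than exponentially:
# p(t) = A - B * (t_c - t)**m with 0 < m < 1, so the growth rate itself
# accelerates as the critical time t_c is approached and the market
# becomes increasingly susceptible to any disturbance.

def bubble_price(t, t_c=1.0, A=100.0, B=50.0, m=0.5):
    """Power-law finite-time-singularity price path, valid for t < t_c."""
    return A - B * (t_c - t) ** m

times = [0.0, 0.5, 0.9, 0.99]
prices = [bubble_price(t) for t in times]
# Average growth rate over each successive interval keeps increasing:
rates = [
    (prices[i + 1] - prices[i]) / (times[i + 1] - times[i])
    for i in range(len(times) - 1)
]
print([round(r, 1) for r in rates])
```

Because the slope diverges as t approaches t_c, no growth rate along such a path can be sustained, which is the sense in which the bubble's own dynamics, rather than any particular news item, sets up the crash.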