7.1.7: Summary Statement and Questions for the Future - Geosciences

Earthquake insurance is a high-stakes game involving insurance companies, policyholders, and in some cases, governments. Because earthquakes are so rare at a given location (in a human time frame, at least), consumers tend to underestimate the need for catastrophic coverage. A Tacoma homeowner was quoted in Business Insurance, saying “My additional premium for earthquake insurance is $768 per year. My earthquake deductible is $43,750. The more I look at this, the more it seems that my chances of having a covered loss are about zero. I’m paying $768 for this?”
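The homeowner's intuition can be checked with a back-of-the-envelope expected-value calculation. The sketch below uses the premium and deductible from the quote; the rebuild cost is a hypothetical figure, since the article does not give one.

```python
# Break-even check for the quoted policy. Premium and deductible are from
# the quote; the rebuild (total-loss) cost is a hypothetical assumption.
premium = 768.0           # annual earthquake premium ($)
deductible = 43_750.0     # earthquake deductible ($)
rebuild_cost = 300_000.0  # assumed total-loss payout before deductible ($)

# In a total loss, the insurer pays the rebuild cost minus the deductible.
net_payout = rebuild_cost - deductible

# Annual probability of a total loss at which expected payout equals premium.
break_even_p = premium / net_payout
print(f"Break-even annual loss probability: {break_even_p:.4f} "
      f"(one such loss every {1 / break_even_p:.0f} years)")
```

If the true annual chance of a loss exceeding the deductible is much lower than this break-even figure, the policy is indeed a poor bet in pure expected-value terms; insurance, of course, is usually bought against the catastrophic tail rather than for its expected value.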

The demand for earthquake insurance shoots up after a catastrophic earthquake at the same time the willingness and capacity of insurance companies to offer such insurance sharply decreases. Insurance is, after all, a business, and for the business to succeed, it must make money.

Insurance companies might underestimate the premiums they should charge in a region like the Pacific Northwest, where a catastrophic earthquake (a subduction-zone or Seattle Fault earthquake rather than a Nisqually Earthquake) has not occurred in nearly two hundred years of recordkeeping. But premiums might be priced too high to attract customers in places that have recently suffered major losses, such as the San Fernando Valley or the San Francisco Bay Area. Indeed, the entire state of California might be in this fix. The California Earthquake Authority (CEA) offers a policy with reduced coverage and higher premiums, which causes many people to drop their earthquake insurance altogether. Yet many underwriters in the insurance industry are still not convinced that the reduced policy is cost-based.

The quality of construction, particularly measures taken against earthquake shaking, will have an increasing impact on premium costs. The Institute for Business and Home Safety (IBHS), an association of insurance companies, has an Earthquake Peril Committee whose goal is the reduction of potential losses. This includes discouraging developers from building in areas at risk from earthquakes and other natural disasters. If a project is awarded an IBHS Seal of Approval, it might be eligible for hazard reduction benefits, including lower premiums.

Recently, the legislatures of Oregon and Washington have funded resilience studies to estimate what it would take to reduce the huge risk posed by a subduction-zone earthquake. Much of the analysis concerns hospitals, businesses, command centers, and lifelines, including water lines, fiber-optic cables, and bridges. Among the concerns: a business on the coast that cannot return to profitability because it is unable to get its products to market might relocate to an area less at risk from earthquakes. The resilience survey for Oregon examined all major bridges and concluded that many of them are obsolete and would be likely to fail in a subduction-zone earthquake. Despite this evidence, the 2015 legislature failed to pass a transportation bill that would have begun to address this problem.

California has already done similar studies, including one for its portion of the Cascadia Subduction Zone. These results have been presented to the respective legislatures, but state governments have yet to commit sufficient resources to significantly reduce the risk. Were they to do so, the risk exposure of insurance companies would change dramatically. For summaries, see CREW (2013) and the Oregon and Washington reports listed under Suggestions for Further Reading.

The federal government still has not determined what its role should be, and the government responses to Hurricanes Katrina and Sandy are not encouraging. What should the general taxpayer be required to contribute? Should FEMA’s efforts include not simply relief but recovery? Aid in reconstruction rather than low-interest loans? Should earthquake insurance be mandatory for properties for which the mortgage is federally guaranteed? Should it be subsidized by the government, particularly for low-income families who are most likely to live in seismically dangerous housing but cannot afford the premiums if they are truly cost-based? The unattractiveness of the CEA mini-policy is causing many Californians to drop all earthquake coverage, which raises a new problem for the finance industry. Thousands of uninsured homeowners might simply walk away from their mortgages and declare bankruptcy if their uninsured homes are destroyed by an earthquake.

Problems such as these tend to be ignored by the public and by the government except in the time immediately following an earthquake. There is a narrow time window (teachable moment) for the adoption of mitigation measures and the consideration of ways to deal with catastrophic losses, including earthquake insurance. Authorized by their legislatures, both Oregon and Washington have designed resilience plans, but the price of resilience is steep, and thus far the governing bodies have not come up with the money to become resilient. The taxpayer appears to be willing to go along with this lack of action.

The question about earthquake damage is: who pays? This question has not been answered.

Suggestions for Further Reading

California Department of Conservation. 1990. Seismic Hazard Information Needs of the Insurance Industry, Local Government, and Property Owners of California. California Department of Conservation Special Publication 108.

Cascadia Region Earthquake Workgroup (CREW), 2013: Cascadia Subduction Zone Earthquakes: a magnitude 9.0 earthquake scenario, update 2013, 23 p.

Insurance Service Office, Inc. 1996. Homeowners insurance: Threats from without, weakness within. ISO Insurance Issues Series, 62 p.

Kunreuther, H., and R. J. Roth, Sr. 1998. Paying the Price: The Status and Role of Insurance against Natural Disasters in the United States. Washington, D.C.: Joseph Henry Press.

Oregon Seismic Safety Policy Advisory Commission (OSSPAC), 2013, The Oregon Resilience Plan: Reducing Risk and Improving Recovery for the Next Cascadia Earthquake and Tsunami: OEM/Pages/index/aspx, summary 8 p.

Palm, R., and J. Carroll. Illusions of Safety: Cultural and Earthquake Hazard Response in California and Japan. Boulder, CO, Westview Press.

Roth, R. J., Jr. 1997. Earthquake basics: Insurance: What are the principles of insuring natural disasters? Earthquake Engineering Research Institute publication.

Washington State Seismic Safety Committee, Emergency Management Council, 2012, Resilient Washington State, a framework for minimizing loss and improving statewide recovery after an earthquake: Final report and recommendations: Division of Geology and Earth Resources, Information Circular 114, 38 p.

13.2 Asteroids and Planetary Defense

Not all asteroids are in the main asteroid belt. In this section, we consider some special groups of asteroids with orbits that approach or cross the orbit of Earth. These pose the risk of a catastrophic collision with our planet, such as the collision 65 million years ago that killed the dinosaurs.

Earth-Approaching Asteroids

Asteroids that stray far outside the main belt are of interest mostly to astronomers. But asteroids that come inward, especially those with orbits that come close to or cross the orbit of Earth, are of interest to political leaders, military planners—indeed, everyone alive on Earth. Some of these asteroids briefly become the closest celestial object to us.

In 1994, a 1-kilometer object was picked up passing closer than the Moon, causing a stir of interest in the news media. Today, it is routine to read of small asteroids coming this close to Earth. (They were always there, but only in recent years have astronomers been able to detect such faint objects.)

In 2013, a small asteroid hit our planet, streaking across the sky over the Russian city of Chelyabinsk and exploding with the energy of a nuclear bomb (Figure 13.13). The impactor was a stony object about 20 meters in diameter that exploded at an altitude of about 30 kilometers, releasing an energy of 500 kilotons (about 30 times larger than that of the nuclear bombs dropped on Japan in World War II). No one was hurt by the blast itself, although it briefly became as bright as the Sun, drawing many spectators to the windows in their offices and homes. When the blast wave from the explosion then reached the town, it blew out the windows. About 1500 people had to seek medical attention for injuries from the shattered glass.
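The "about 30 times" comparison follows directly from the stated yields; a quick sketch, taking the commonly cited 15-kiloton yield of the Hiroshima bomb as an assumption:

```python
# Energy comparison for the Chelyabinsk airburst. The 500-kiloton figure is
# from the text; the 15-kiloton Hiroshima yield is a commonly cited value.
KILOTON_J = 4.184e12      # joules per kiloton of TNT (by definition)

chelyabinsk_kt = 500
hiroshima_kt = 15

energy_j = chelyabinsk_kt * KILOTON_J
ratio = chelyabinsk_kt / hiroshima_kt
print(f"Chelyabinsk: about {energy_j:.2e} J, roughly {ratio:.0f}x Hiroshima")
```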

A much larger atmospheric explosion took place in Russia in 1908, caused by an asteroid about 40 meters in diameter, releasing an energy of 5 megatons, as large as the most powerful nuclear weapons of today. Fortunately, the area directly affected, on the Tunguska River in Siberia, was unpopulated, and no one was killed. However, the area of forest destroyed by the blast was large, equal to the size of a major city (Figure 13.13).

Together with any comets that come close to our planet, such asteroids are known collectively as near-Earth objects (NEOs). As we will see (and as the dinosaurs found out 65 million years ago), the collision of a significant-sized NEO could be a catastrophe for life on our planet.

Link to Learning

Watch a video compilation of the Chelyabinsk meteor streaking through the sky over the city on February 15, 2013, taken by people who were in the area when it occurred.

Link to Learning

View this video of a non-technical talk by David Morrison, “The Chelyabinsk Meteor: Can We Survive a Bigger Impact?” Dr. Morrison (SETI Institute and NASA Ames Research Center) discusses the Chelyabinsk impact and how we learn about NEOs and protect ourselves. The talk is from the Silicon Valley Astronomy Lectures series.

Astronomers have urged that the first step in protecting Earth from future impacts by NEOs must be to learn what potential impactors are out there. In 1998, NASA began the Spaceguard Survey, with the goal to discover and track 90% of Earth-approaching asteroids greater than 1 kilometer in diameter. The size of 1 kilometer was selected to include all asteroids capable of causing global damage, not merely local or regional effects. At 1 kilometer or larger, the impact could blast so much dust into the atmosphere that the sunlight would be dimmed for months, causing global crop failures—an event that could threaten the survival of our civilization. The Spaceguard goal of 90% was reached in 2012 when nearly a thousand of these 1-kilometer near-Earth asteroids (NEAs) had been found, along with more than 10,000 smaller asteroids. Figure 13.14 shows how the pace of NEA discoveries has been increasing over recent years.

How did astronomers know when they had discovered 90% of these asteroids? There are several ways to estimate the total number, even before they were individually located. One way is to look at the numbers of large craters on the dark lunar maria. Remember that these craters were made by impacts just like the ones we are considering. They are preserved on the Moon’s airless surface, whereas Earth soon erases the imprints of past impacts. Thus, the number of large craters on the Moon allows us to estimate how often impacts have occurred on both the Moon and Earth over the past several billion years. The number of impacts is directly related to the number of asteroids and comets on Earth-crossing orbits.

Another approach is to see how often the surveys (which are automated searches for faint points of light that move among the stars) rediscover a previously known asteroid. At the beginning of a survey, all the NEAs it finds will be new. But as the survey becomes more complete, more and more of the moving points the survey cameras record will be rediscoveries. The more rediscoveries each survey experiences, the more complete our inventory of these asteroids must be.
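This rediscovery argument is essentially a mark-recapture estimate: if a survey's detections sample the population at random, the fraction of detections that are rediscoveries estimates the catalogued fraction of the whole population. A minimal sketch with hypothetical numbers:

```python
# Survey completeness from the rediscovery rate (mark-recapture style).
# All counts here are hypothetical illustrations, not survey data.
known = 900            # NEAs already in the catalogue
detections = 200       # moving objects recorded in a new survey run
rediscoveries = 180    # detections that match catalogued objects

# If detections sample the population at random, the rediscovery fraction
# estimates the fraction of the whole population already catalogued.
completeness = rediscoveries / detections
population_estimate = known / completeness
print(f"Estimated completeness: {completeness:.0%}; "
      f"estimated total population: {population_estimate:.0f}")
```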

We have been relieved to find that none of the NEAs discovered so far is on a trajectory that will impact Earth within the foreseeable future. However, we can’t speak for the handful of asteroids larger than 1 kilometer that have not yet been found, or for the much more numerous smaller ones. It is estimated that there are a million NEAs capable of hitting Earth that are smaller than 1 kilometer but still large enough to destroy a city, and our surveys have found fewer than 10% of them. Researchers who work with asteroid orbits estimate that for smaller (and therefore fainter) asteroids we are not yet tracking, we will have about a 5-second warning that one is going to hit Earth—in other words, we won’t see it until it enters the atmosphere. Clearly, this estimate gives us a lot of motivation to continue these surveys to track as many asteroids as possible.

Though entirely predictable over times of a few centuries, the orbits of Earth-approaching asteroids are unstable over long time spans as they are tugged by the gravitational attractions of the planets. These objects will eventually meet one of two fates: either they will impact one of the terrestrial planets or the Sun, or they will be ejected gravitationally from the inner solar system due to a near-encounter with a planet. The probabilities of these two outcomes are about the same. The timescale for impact or ejection is only about a hundred million years, very short compared with the 4-billion-year age of the solar system. Calculations show that only approximately one quarter of the current Earth-approaching asteroids will eventually end up colliding with Earth itself.
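The hundred-million-year timescale can be translated into a survival fraction if removal is treated as a constant-rate (exponential) process; this is a simplifying assumption for illustration, not the actual orbital calculation:

```python
import math

# Exponential depletion of Earth-approaching asteroids, assuming removal
# (impact or ejection) happens at a constant rate with a 100 Myr timescale.
tau_myr = 100.0    # removal timescale from the text (Myr)
age_myr = 4000.0   # approximate age of the solar system (Myr)

surviving_fraction = math.exp(-age_myr / tau_myr)
print(f"Surviving fraction of a primordial population: {surviving_fraction:.1e}")
```

The surviving fraction is vanishingly small, which is the quantitative reason the current population must be continuously resupplied.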

If most of the current population of Earth-approaching asteroids will be removed by impact or ejection in a hundred million years, there must be a continuing source of new objects to replenish our supply of NEAs. Most of them come from the asteroid belt between Mars and Jupiter, where collisions between asteroids can eject fragments into Earth-crossing orbits (see Figure 13.15). Others may be “dead” comets that have exhausted their volatile materials (which we’ll discuss in the next section).

One reason scientists are interested in the composition and interior structure of NEAs is that humans will probably need to defend themselves against an asteroid impact someday. If we ever found one of these asteroids on a collision course with us, we would need to deflect it so it would miss Earth. The most straightforward way to deflect it would be to crash a spacecraft into it, either slowing it or speeding it up, slightly changing its orbital period. If this were done several years before the predicted collision, the asteroid would miss the planet entirely—making an asteroid impact the only natural hazard that we could eliminate completely by the application of technology. Alternatively, such deflection could be done by exploding a nuclear bomb near the asteroid to nudge it off course.
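The leverage of early deflection can be estimated with a simple along-track calculation: a small velocity change applied years in advance accumulates into a displacement of roughly the velocity change times the lead time. The sketch below ignores the detailed orbital mechanics (in reality the miss comes from the changed orbital period), so treat it as an order-of-magnitude illustration:

```python
# Order-of-magnitude deflection estimate: velocity change needed to
# accumulate one Earth radius of displacement over a given lead time.
# This ignores real orbital mechanics and is only an illustration.
EARTH_RADIUS_M = 6.371e6
SECONDS_PER_YEAR = 3.156e7

lead_time_years = 10
lead_time_s = lead_time_years * SECONDS_PER_YEAR

delta_v = EARTH_RADIUS_M / lead_time_s   # m/s
print(f"Required delta-v: about {delta_v * 1000:.0f} mm/s")
```

A push of a few tens of millimeters per second, applied a decade out, suffices in this estimate; the earlier the push, the smaller it needs to be, which is one more reason early detection matters.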

To achieve a successful deflection by either technique, we need to know more about the density and interior structure of the asteroid. A spacecraft impact or a nearby explosion would have a greater effect on a solid rocky asteroid such as Eros than on a loose rubble pile. Think of climbing a sand dune compared to climbing a rocky hill with the same slope. On the dune, much of our energy is absorbed in the slipping sand, so the climb is much more difficult and takes more energy.

There is increasing international interest in the problem of asteroid impacts. The United Nations has formed two technical committees on planetary defense, recognizing that the entire planet is at risk from asteroid impacts. However, the fundamental problem remains one of finding NEAs in time for defensive measures to be taken. We must be able to find the next impactor before it finds us. And that’s a job for the astronomers.

The Linear Structure of a Text

The most obvious way to divide a text is into a linear hierarchy of units, with each unit being embedded within larger units and being composed of one or more smaller units. The traditional method of outlining biblical books depends upon this principle of analyzing a text. Longacre (1983b, 285) has posited that any text can be analyzed hierarchically by distinguishing eight levels of units: discourse, paragraph, sentence, clause, phrase, word, stem, and morpheme. Generally speaking, units on a lower level combine to form units on a higher level; however, levels can be skipped so that, for example, a sentence can be analyzed as a combination of words. This is because it is possible to have one-unit constructions. There can be one-paragraph discourses, one-sentence paragraphs, one-clause sentences (usually called "simple sentences"), one-word phrases, and one-morpheme words. It is even possible to collapse all the levels so as to have a one-morpheme discourse, as when someone shouts "Fire!"

In addition, Longacre (1983b, 279-280) has noted that it is possible for units to be formed recursively. A paragraph may be composed of two or more paragraphs. A word may be composed of two or more words; for example, the word 'football' is made by combining the words 'foot' and 'ball'. Recursion can also work in combining elements that are not on the same level. A paragraph can be composed of a topic sentence plus an amplification paragraph. A prepositional phrase can be composed of recursively embedded prepositional phrases (e.g., "the power of the Spirit of the God of heaven"). This kind of recursion can also happen on the word level (e.g., 'right', 'righteous', and 'righteousness').

Longacre (1983b, 280-281) has also noted a third kind of combination of units that he calls backlooping. This is where higher level units are embedded within lower level units. A typical example of such a construction is a relative clause modifying a noun phrase (e.g., "the God who brought Israel out of Egypt"). Another common instance of backlooping occurs when a quoted paragraph is embedded in the object slot of a quotative sentence. But backlooping can even happen in some less common ways. For example, a noun phrase can be embedded in a slot that usually expects a noun, such as "the King of England's crown," where the phrase "King of England" is marked with a possessive morpheme just as a noun would be. Both Pike (1967, 107) and Longacre (1983b, 280) have noted Martin Luther King, Jr.'s, "see-how-far-you've-come-ism," where a whole clause is embedded in a slot that usually takes a noun stem.

The Colon as Linguistic Sentence

A question arises at this point as to exactly what the linguistic declarative sentence is in Greek. Two endmarks of punctuation are used in declarative text by modern editors of Greek texts: the period and the colon. The period defines the end of the Greek sentence in current usage, and the raised dot (also called a 'colon') defines the end of the colon. The colon is in every respect a linguistic sentence: its nucleus is an independent clause, which may be modified by various types of subordinate clauses. In his work on New Testament Greek semantics, Louw notes, "in this analysis the colon is defined, not in terms of its semantic unity, but in terms of certain specific grammatical structures which in many ways parallel what would be regarded as sentences in English" (1982, 95). It may well be that what modern editors mark as a multi-colon Greek sentence corresponds to a simple type of paragraph. This same kind of confusion as to what a linguistic sentence is exists in English. Charles Fries (1952, 10-11) once asked a number of English teachers to decide how many sentences existed within a text that could be punctuated with both periods and semicolons. They could not agree on the actual number of sentences in the text. Despite the ambiguity as to what constitutes a sentence, in both Greek and English, it seems best to choose the colon as the linguistic sentence, since it is the minimal possible sentence.

This colon marked in current editions of the Koiné Greek New Testament should not be confused with what the ancient Greek grammarians referred to as a kōlon 'colon', for that unit corresponds more to the modern clause (Demetrius, On Style I [§1-8]). The colon as marked by modern editors was called a periodos 'period' by ancient grammarians such as Demetrius (On Style I [§10-11]).

Louw (1982) has also chosen to make the colon the unit of choice for discourse analysis in the Greek New Testament. Besides the basic fact that the Greek colon as currently marked seems to correspond to the linguistic sentence, Louw gives an additional reason for using the terminology colon to describe the linguistic unit analyzed: "in certain linguistic analyses the term sentence (with the abbreviation S) has been employed in speaking of any syntactic string which may be less or even more than a colon" (1982, 100).

This study differs from Louw's use of the term in only one respect: Louw rejects the possibility of having a compound colon. He writes, "All of this means that so-called simple sentences and complex sentences (those with dependent clauses) are regarded as colons, while so-called coordinate sentences (those in which potentially independent clauses are combined by coordinate conjunctions) are regarded as consisting of two or more colons" (Louw 1982, 102). There are four reasons for not following Louw in his rejection of compound colons. First, the standard Greek punctuation of colons in current editions of the New Testament sometimes includes coordinated independent clauses within a single colon. To redefine the colon as Louw does would have each researcher working with different units. Second, Louw wants "the man went to Boston and the boy played in his room" (1982, 101) to be two colons, while he understands "the horse and the bull are grazing" (1982, 101) to be a single colon with a compound subject (although it is typically analyzed as two kernel sentences in the deep structure) and "my good friend came and gave me a book" (1982, 97) to be a single colon with a compound predicate. Against this is the fact that a Greek verb can be a colon on its own, since subject agreement is marked on the verb and can function as an indicator of the subject of the clause. Thus a compound predicate can usually also be analyzed as compound clauses in Greek. Third, the fact that one can have compound subjects in a subject slot and compound predicates in a predicate slot would argue that by analogy one could also have compound clauses in a clause slot of a colon. Finally, evidence indicates that kai 'and' often occurs between clauses in a compound colon but rarely between colons. Of 105 instances of kai 'and' where the word occurs as the only conjunction in uncontracted form in I Corinthians, 95 occur within the colon and only 10 occur at the beginning of a colon. 
This is similar to the findings of Levinsohn in the book of Acts, where he discovered that kai 'and' was used mainly to join elements within what he called development units (1987, 96-120).

With this brief introduction to the concept of four kinds of embedding (normal, skipped, recursive, and backlooped) and the selection of the Greek colon as the linguistic sentence, it is possible to proceed to analyze the text of I Corinthians as a combination of units or particles. This study will focus on the relationships of the higher level units, especially paragraphs.

All sentences in a paragraph share some kind of relationship with one another. Using Pike's four-celled tagmeme as a descriptor, that relationship can always be described in terms of role. More will be said about the role relationship later in this chapter. For the present, the question must be posed: are there relationships between sentences which bind them together into paragraphs and yet can be described in a purely structural way (i.e., in tagmemic terms, merely using slot and class)? The answer is yes. There are several kinds of paragraphs in I Corinthians that are marked by grammatical features in the surface structure.

First, there is a question-answer paragraph that in its simplest form consists of two colons: the first a question and the second an answer. Examples of this in I Corinthians include 11:22 and 14:15, as shown in (29).

Second, there is a question-command paragraph that consists of a question followed by a command. This form often functions as a type of conditional command. If the question can be answered affirmatively, the command should be obeyed. Examples of this include 7:18 (bis), 21, and 27 (bis), as shown in (30).

For a field perspective of the various patterns in this structure, see (40) below. The example in 7:21 is of a double command, the second one introduced by all' ei 'but if', with the conditional clause introducing an additional condition, as shown in (31).

Third, there are paragraphs that show a grammatical chiastic structure. Examples of these include 9:20-22, 10:7-10, and 13:8-13, as shown below in (51), (52), and (62). Such paragraphs can be viewed either as structures from a particle perspective or as patterns from a field perspective. But since non-linear paragraphs are handled better from the field perspective, these will be discussed below in more detail in the section on chiasmus under the heading of field.

Finally, there are paragraphs which are composed of parallel units, either smaller embedded paragraphs or linguistic sentences (colons). These can be categorized by whether they are composed of two or more than two units, here called binary or multiple, respectively. They can also be categorized by whether they are composed of statements, questions, or commands. Where these parallel structures are composed of two or three colons, they are sometimes referred to as couplets or triplets, respectively.

There are three examples in I Corinthians of paragraphs composed of parallel microparagraphs, that is, low level paragraphs whose only constituent units are linguistic sentences (colons). All examples are binary, limited to two parallel units. I Corinthians 15:39-41 is an example of two parallel microparagraphs involving statements. The Today's English Version (TEV) starts a new orthographic paragraph in the middle of this structure, but such a break does not seem to fit the Greek text. I Corinthians 7:18 and 7:27 are examples of two parallel microparagraphs involving questions, as shown above in (30).

Most of the examples of parallelism involve colons rather than microparagraphs. By far the greatest number of parallel structures involve binary colon statements. There are varying degrees of parallelism in I Corinthians, but the following are clear examples of this type of microparagraph: 3:5, 14-15; 6:12; 7:22; 9:17; 10:21, 23; 11:4-5, 8-9; 12:15-16, 26; 14:4, 15; and 16:23-24. I Corinthians 12:26 is shown in (32) as an example.

There are also several examples of parallel structures that involve binary colon questions. Among the clearest examples are 7:16; 9:1, 5-6; 10:16; 11:22 (bis); and 15:55, with the latter shown in (33) as an example.

I Corinthians also contains some examples of parallel binary colon commands. Among these are 7:12-13; 10:25 and 27; and 14:28 and 30. I Corinthians 7:12-13 is shown in (34) as an example.

Turning from binary to multiple colon parallelism, there are several examples of triple colon statements in I Corinthians. Among the clearest are 4:8, 10; 7:32-34; 12:4-6; 13:1-3; and 15:42-44. I Corinthians 12:4-6 is given in (35) as an example.

There are also five examples of parallelism in multiple colon questions: 1:20; 9:7; 12:17 and 19; 12:29; and 12:30. The second one is given as an example in (36).

All of these examples are triplets, except for 12:29, which contains four grammatically parallel questions.

Thus, the book of I Corinthians contains four basic types of grammatically structured paragraphs: question-answer, question-command, chiastic, and parallel. These units form the smallest types of paragraphs in I Corinthians. Ideally, any analysis of paragraph structure in this book would not start a new paragraph in the middle of one of these units. Unfortunately, in English translations, this has not always been the case, as we shall shortly see.

Discourse analysis does not always identify the same paragraph junctures that are marked by translators and editors in a text. Even translators and editors differ among themselves as to exactly where a new paragraph should begin. Some do not begin paragraphs very often, while others begin paragraphs rather frequently. A comparison of paragraph beginnings between the New American Standard Version (NASV) and the Today's English Version (TEV), as shown in Table 6, will bear this out. The translators of the New American Standard Version begin new paragraphs less frequently than the editors of the Greek texts, while the Today's English Version begins new paragraphs with such frequency that its breaks cut across structural Greek paragraphs and even colon boundaries. This technique may be legitimate paragraphing for a simple English translation (for English paragraphing rules may well vary from Greek rules), but it is of little use to the discourse analyst who is trying to draw on the understanding of others to help determine paragraph boundaries in the Greek text.

Orthographic paragraphing is of limited use in discourse analysis because it generally ignores the recursive nature of paragraphs. Most translations have only one level of paragraph indication. An exception is the 26th edition of the Nestle-Aland Novum Testamentum Graece, which indicates three levels of paragraphing by orthographic technique: major section breaks are indicated by spacing before a paragraph, major paragraph breaks are indicated by indention from the left margin, and minor paragraph breaks are indicated by additional spacing within a line. Where translations indicate only one level of paragraphing, there is little indication as to whether indention is taking place to signify major paragraphs, intermediate paragraphs, or minor paragraphs.

However, because different translations and editions indicate different levels of paragraphing, they can be compared to form a general idea of the relative level of the paragraph breaks in a text. In Table 6, two editions of the Greek text of I Corinthians and seven English translations are compared as to paragraph breaks. Those breaks on which seven to nine of the editions and translations agree can be considered major paragraph breaks. In the same way, breaks on which four to six agree can be considered intermediate, and breaks on which only one to three agree can be considered minor paragraph breaks. The assignment of the classifications major, intermediate, and minor to groups of three is an arbitrary one based on a linear progression; however, it is reasonable that a change in topic which more editors and translators notice is likely to be more significant than one which fewer editors and translators notice.
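The grouping rule just described can be stated compactly. The sketch below is illustrative; the function name and the example agreement counts are hypothetical:

```python
# Classify a paragraph break by how many of the nine editions and
# translations compared in Table 6 mark it, following the groupings
# described in the text.
def classify_break(agreement_count: int) -> str:
    """Map an agreement count (1-9) to a paragraph-break level."""
    if not 1 <= agreement_count <= 9:
        raise ValueError("agreement count must be between 1 and 9")
    if agreement_count >= 7:
        return "major"
    if agreement_count >= 4:
        return "intermediate"
    return "minor"

# Hypothetical agreement counts for three breaks:
print([classify_break(n) for n in (9, 5, 2)])
```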

Table 6 also lists three other grammatical indications of paragraphing: the presence of vocatives and the word idou 'behold', the use of first person verbs in the colons preceding and following the break, and the use of second person verbs in the colons preceding and following the break. By way of clarification, the term colon following the break is used to refer to the first colon in the new paragraph and the term colon preceding the break is used to refer to the last colon in the previous paragraph.

Vocatives are commonly used to signify the beginning of a paragraph in Greek (cf. Miehle 1981, 98 and Longacre 1983a, 3, 13, 22, 25, 30 for I John as well as Hymes 1986, 80 and Terry 1992, 113, 118 for James). Eighteen of the twenty-five vocatives in I Corinthians occur in the colons that begin paragraphs. In addition, three vocatives (two in 7:16 and one in 7:24) occur in the final colon of a paragraph. The first discourse not only contains a vocative (adelfoi 'brothers') in its first colon in 1:10, but also a vocative (adelfoi mou 'my brothers') in its second colon in 1:11. The remaining three vocatives (one in 15:31 and two in 15:55) are found in the eighth discourse in what is probably peak material (see chapter V of this study for further discussion of peak). The vocative in 15:31 is omitted by many manuscripts, probably because it is not used in this place in the normal Greek way of beginning a paragraph. It is also possible to treat marana (Aramaic for 'Lord') in 16:22 as a vocative, although it is not likely that the transliterated Aramaic marana qa 'O Lord, come' is a paragraph by itself, as the New English Bible (NEB) prints it.

The Greek word idou 'behold' is a particle used as an exclamation, not a vocative; however, it often functions in the same way as a vocative in marking the beginning of paragraphs in Greek. For this reason, the Revised Standard Version (RSV) and the Today's English Version (TEV) mark 15:51 as the beginning of a new paragraph; however, there are structural parallels between 15:50 and 15:51 that indicate that they belong together. Any paragraph that 15:51 begins must be a minor paragraph indeed.

In epistolary text, it is common for the writer to refer to himself and to the readers. This is especially true around paragraph boundaries, where the writer is more likely to relate the discussion of general principles to the parties involved. Table 6 lists whether first and second person verbs are found in the colon following or preceding the paragraph boundary or both. Their presence or absence is summarized in Table 7. Table 7 shows that there is a direct relationship between the presence of interpersonal verb endings and paragraph level. Among major paragraph breaks, 56 (84.8%) had either first or second person verbs in the surrounding colons. Intermediate paragraph breaks showed 23 (79.3%) with interpersonal verbs in the colons on either side of the break. Minor paragraph breaks showed 41 (73.2%) with interpersonal verbs in the surrounding colons. And substructural paragraph breaks showed only 5 (50%) with interpersonal verbs. Thus the higher the paragraphing level, the more likely interpersonal verb endings (either first or second person) are to occur in the surrounding colons. In addition, on a discourse level, all 10 (100%) of the discourses in the letter show either first or second person verbs in the colons surrounding the beginning of the discourses.
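The totals behind these percentages are not stated above, but they can be recovered arithmetically from the counts and percentages given, assuming the percentages were rounded to one decimal place. A quick check:

```python
# Counts with interpersonal verbs, paired with the percentages quoted above.
data = {
    "major":         (56, 84.8),
    "intermediate":  (23, 79.3),
    "minor":         (41, 73.2),
    "substructural": (5, 50.0),
}

for level, (count, pct) in data.items():
    total = round(count / (pct / 100))  # implied total breaks at this level
    print(f"{level}: {count}/{total} = {100 * count / total:.1f}%")
```

This reproduces the quoted figures exactly, implying 66 major, 29 intermediate, 56 minor, and 10 substructural breaks.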


This tendency at paragraph breaks holds especially true for discourse breaks. Table 8 shows the boundary markers for the beginnings of the ten proposed discourses in I Corinthians plus the introduction and conclusion. All ten discourses show either the first or second person in the first colon in the discourse, with half of them showing both. Only the boundary at 7:1 shows a first colon not containing a first person verb. In addition, all the boundaries except for 15:1 show either first or second person in the preceding colon. And all of the discourses begin with the Greek conjunction de 'now'. It is worth noting that if the beginning of the second discourse is taken to be 5:1 instead of 4:18, all but one of these generalizations fail to hold. This tends to confirm the conclusion of chapter III that the second discourse begins at 4:18 rather than 5:1.


Boundary   Discourse   Introductory Words     Vocative   1st Person   2nd Person
1:1        Int.        --                     --         Following    --
1:10       1           de 'Now'               brothers   Following    Both
4:18       2           de 'Now'               --         Both         --
7:1        3           Peri de 'Now about'    --         --           Both
8:1        4           Peri de 'Now about'    --         Both         --
11:2       5           de 'Now'               --         Both         Both
11:17      6           de 'Now'               --         Both         --
12:1       7           Peri de 'Now about'    brothers   Both         Following
15:1       8           de 'Now'               brothers   Following    Following
16:1       9           Peri de 'Now about'    --         Following    Both
16:12      10          Peri de 'Now about'    --         Both         Preceding

Finally, Table 9 shows a summary of beginning words for paragraph breaks on the major and intermediate levels. It is worth noting that de 'now' is the overwhelming conjunction of choice for beginning major paragraphs. This is similar to the result that Levinsohn found in analyzing conjunctions in the book of Acts, where he found that de was used to connect major segments that he labeled development units (1987, 83-96). The word gar 'for', signaling an explanation to follow, is second with six usages. It is also significant that 27 (41%) of the major paragraphs begin without any conjunction, while only 2 (7%) of the intermediate paragraphs show no conjunction at the beginning.


Not too much should be made of the fact that two words are used to begin major paragraphs but not intermediate paragraphs. This may only mean that they are not used often enough in this text to occur in this role. The four words that begin intermediate paragraphs but not major paragraphs are more significant. It is possible here also that this lack is due to a rarity of usage. However, the concepts of consequence (dio and oun 'therefore') and alternative (h 'or') which three of the words embody suggest subordinate ideas to follow and are thus perhaps to be expected on an intermediate level. At any rate, it is noteworthy that oun 'therefore' begins three intermediate paragraphs but no major paragraphs.

Advantages of Constituent Structure Analysis

The study of orthographic paragraphs, while useful, can only take the discourse analyst so far into the discourse. Generally such paragraphs are the result of intuitive guesses by editors and translators rather than being based on any kind of structural analysis. To really examine the paragraph structure in depth, one must turn to a study of the relationships between recursively embedded paragraphs. Louw has noted, "in general any total discourse that is longer than one paragraph must obviously be analyzed primarily in terms of the relationships between the constituent paragraphs" (1982, 98).

There are many ways to analyze these relationships in a text from a particle perspective. But constituent structure analysis has certain advantages over other methods of analysis. First, it focuses the analysis on role, the basic relationship between higher level units. The analyst is forced to identify the role that each unit plays in the discourse and the primary unit to which it is related. Second, it shows clearly the level of embedding of each unit in the discourse. An outline also shows level of embedding, but the embedding is based on topics and subtopics rather than on the relationships between units. Constituent structure analysis, on the other hand, focuses on the grammatical hierarchy as well as the conceptual. Third, it takes into account the texttype and structural characteristics of the units (such as parallelism, question forms, and cyclical and chiastic presentations) under investigation. Fourth, it charts enough variables to allow the analyst to categorize types of paragraphs by kind of branching, level of embedding, and texttype. And finally, it allows the analyst to relate the results to a salience level chart and formulate a theory of verb ranking for a given text. When enough texts have been analyzed, this permits the analyst to formulate theories of salience levels and verb ranking for different texttypes and even genres.

Constituent Paragraph Structure

Now relationships between paragraphs are not always overtly marked. Rather they are often inherent only in the meaning of the paragraphs. For this reason Young, Becker, and Pike speak of a generalized plot as "a sequence of semantic slots" (1970, 319). On a lower level, paragraphs also may be said to exhibit plots (Young, Becker, and Pike 1970, 320). These plots are often marked on the surface structure of a text by what may be called plot cues. Plot cues are words and phrases which "indicate the relationship of one linguistic unit to another within a specific, or surface, plot" (Young, Becker, and Pike 1970, 322). Now since the term plot is usually reserved for narrative texttype, it is perhaps better to refer to these overt markers as relational cues. If paragraph B is an instance of paragraph A, it may well begin with a relational cue such as for example or for instance. If paragraph B contains a cause for paragraph A or a reason for it, the relational cues because, since, therefore, or consequently may be found in the text (Young, Becker, and Pike 1970, 322).
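The relational cues quoted from Young, Becker, and Pike can be sketched as a simple lookup. The function and the idea of scanning only a paragraph's opening words are this writer's illustration, not part of their model; the cue-to-relation pairings are those named above.

```python
# Cues quoted above, mapped to the relations they signal.
RELATIONAL_CUES = {
    "for example":  "instance",
    "for instance": "instance",
    "because":      "cause/reason",
    "since":        "cause/reason",
    "therefore":    "cause/reason",
    "consequently": "cause/reason",
}

def opening_cue(paragraph):
    """Return the relation signaled by a cue at the start of a paragraph,
    or None if the paragraph opens without an overt relational cue."""
    text = paragraph.lower().lstrip()
    for cue, relation in RELATIONAL_CUES.items():
        if text.startswith(cue):
            return relation
    return None

print(opening_cue("For example, consider the church at Corinth."))  # instance
print(opening_cue("Therefore, stand firm."))                        # cause/reason
print(opening_cue("Paul wrote many letters."))                      # None
```

The final case illustrates the point of the next paragraph: the absence of an overt cue does not mean the absence of a semantic relationship.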

But even where such overt markers do not exist, the semantic relationships between paragraphs which they signify do. In commenting on a Beekman-Callow relational structure tree diagram of I John, Miehle has noted, "Even on the lower levels of structure, I have been prompted more by the semantic rather than the grammatical structure" (1981, 105). This is where Pike's four-celled tagmeme defined in the first chapter of this study becomes a useful tool. The third cell is that of role, an acknowledgement that grammar is more than syntax; it contains an element of semantics even within its structure.

For example, in Greek the category voice is used to distinguish active, middle, and passive. These categories do not just refer to structural forms, but to semantic relationships within the sentences in which they are used. Now even when the structures of the middle and the passive are the same, the relationships signified by the two are quite different. Further, these relationships are grammatical, not merely conceptual. There is a significant semantic difference, but not an ultimate conceptual difference, between "The key turned in the lock" and "The key was turned in the lock." In both, the speaker and listener may conceptualize a person turning the key, even though neither sentence specifies such. The semantic difference is entirely due to the grammar, not to the conceptual picture drawn by word choice. Pike's inclusion of role in the grammatical tagmeme allows this semantic element to be presented as an integral part of grammar, thus emphasizing his idea that units should be treated as form-meaning composites (1982, 111-113).

Longacre has taken this concept a step further by analyzing two-celled paragraph tagmemes with what can be taken as Role:Class instead of the traditional Slot:Class. This is consistent with the two-celled tagmeme, since originally both were combined (e.g., Slot could be filled by subject-as-actor, where actor is a Role; Pike 1982, 77). Role seems to be more significant in determining relationship than slot does. Longacre has given a fairly detailed treatment of this method of analysis (1970; 1980). It is well illustrated for a biblical text in his analysis of I John (1983a) and in the fourth chapter of his book Joseph (1989a, 83-118), an analysis of the Hebrew text of Genesis 37 and 39-48.

Several of the different types of paragraphs which have been identified to date based on role are listed in Table 10. The terminology in the table is generally that of Longacre, who labels the head or nucleus of the paragraph the thesis, although at one time he used the term text for some units (1989a, 83-118). Also following Longacre (1989b, 450-458), his earlier terminology for constituent elements of the sequence paragraph has been changed here from Build-up (BU) to Sequential Thesis (SeqT). The term build-up applies best to narrative material before the climax, but even in this material an item in the sequence may not build up the storyline. In the same way, the coordinate paragraph is sometimes analyzed as two items rather than two theses (Longacre 1989a, 116). Amplification and clarification paragraphs are similar, but the former merely gives additional information, while the latter does so in order to make the thesis clear. Clendendon (1989, 131) has labeled the evidence paragraph the attestation paragraph. But the terminology followed here is current and understandable.

Most of the entries in Table 10 are listed as right branching paragraphs, that is, paragraphs in which the thesis comes first. The exceptions are the condition paragraph and the quote paragraph, both of which are left branching, that is, paragraphs in which the thesis comes last. These are the normal (unmarked) orderings for these paragraph types, but it is possible for paragraph types which are normally right branching to be left branching and vice versa. There are three other possibilities listed in Table 10. Although it may have an introduction as a left branch, the simple paragraph is often without such a branch, having only a head or nucleus. Next, the coordinate paragraph, the dialogue paragraph, and the simultaneous paragraph are usually double headed, although they may be multiple headed. Finally, the sequence paragraph is usually multiple headed.

In addition, paragraphs can be categorized according to structural features such as the ones illustrated in (29) through (36). Following Longacre's terminology, the question-answer paragraph can be called rhetorical question-answer or simply rhetorical for short, the question-command paragraph called rhetorical command, the chiastic paragraph called chiastic, and the parallel paragraph called parallel (1979b, 131). When paragraphs have rhetorical and rhetorical command structure, they often become left branching. Longacre also has identified running quote and cyclic paragraphs as further examples of what he calls stylistic types (1979b, 131). A paragraph can thus be identified by a combination of its stylistic structural type, its branching direction, its texttype, and its basic role relationship as listed in Table 10.
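The four-way identification just described (stylistic type, branching direction, texttype, and role) can be modeled as a small data structure. The field names and the example values below are this writer's illustration, not Longacre's notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParagraphType:
    """A paragraph identified along the four dimensions named above."""
    role: str                        # e.g., "reason", "antithetical", "sequence"
    texttype: str                    # e.g., "hortatory", "expository"
    branching: str = "right"         # "right" (thesis first) or "left" (thesis last)
    stylistic: Optional[str] = None  # e.g., "rhetorical", "chiastic", "parallel"

    def label(self) -> str:
        """Compose the full identifying label, omitting an absent stylistic type."""
        parts = [self.stylistic, self.branching + "-branching",
                 self.texttype, self.role, "paragraph"]
        return " ".join(p for p in parts if p)

p = ParagraphType(role="reason", texttype="hortatory",
                  branching="left", stylistic="rhetorical")
print(p.label())  # rhetorical left-branching hortatory reason paragraph
```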

Illustrations of this method of analysis are given in Tables 11 (for I Cor. 1:10-17), 12 (for I Cor. 2:6-16), 13 (for I Cor. 3:10-15), 14 (for I Cor. 6:12-20), and 15 (for I Cor. 10:23-11:1). All of these are major paragraphs according to the study of orthographic paragraphs done above.


But this method of analysis provides a much clearer picture of the relationships, the level of embedding, and even the boundaries between paragraphs than a study of orthographic paragraphs does. For example, Table 6 shows minor paragraphs beginning at 2:10 and 2:14; however, Table 12 shows that 2:10 is actually a place where a series of right branching paragraphs ends and the relationship returns to a higher level paragraph. In the same way, 2:14 is the second half of an antithetical paragraph, and the contrast has proven to be a good place to mark an orthographic paragraph. The analysis also shows that 2:10b is not an ideal place to mark an orthographic paragraph (as the NIV and NEB have done) because to do so obscures the relationships.

The sample sections analyzed in Tables 11-15 have been chosen to give a cross-section of material from different texttypes. Table 11 shows a combination of texttypes, Table 12 shows a text of primarily persuasive texttype, Table 13 has a text of mostly expository texttype, and Tables 14 and 15 are mainly hortatory texttype. The assignment of texttype here is admittedly subjective, being based upon an intuitive assessment of purpose. A charting of texttype by this writer is shown in Appendix A. The sections analyzed have also been chosen from material which is non-peak in nature, so that any shift in grammatical markers due to peak will not be a factor. Peak will be discussed further in the next chapter of this study.


Read through the following statements/questions. You should be able to answer all of these after reading through the content on this page. I suggest writing or typing out your answers, but if nothing else, say them out loud to yourself.

Nuclear energy has been a hot-button issue for a very long time, both domestically and internationally. It provides a significant portion of the global electricity supply, as you will see in the image below.


As you (hopefully) recall from Lesson 1, nuclear energy is non-renewable. Uranium is by far the most-used nuclear fuel, though there are possible alternatives (such as thorium). As with other non-renewable fuels, the uranium that is on earth now is all that we will ever have, and estimates can be made of the remaining recoverable resources. As you will see in the article below, at current rates of consumption, we will not run out of uranium any time soon. But - at risk of sounding like a broken record - this depends very highly on a number of variables, including consumption staying at current levels, technology not advancing, estimates of reserves not changing, and so forth. If, for example, we waved a magic wand and doubled the output of nuclear power tomorrow, the estimated reserves would last half as long.

The World Nuclear Association (WNA), an industry association, provides a very thorough explanation of possible complicating factors, but they state that at current rates of consumption, the world has enough reserves to last about 90 years. The Nuclear Energy Agency (NEA), like the WNA, is effectively an industry group and has a wealth of expertise at its disposal. It operates out of the OECD (remember them from Lesson 1?) in Paris. They are a pro-nuclear group but are very good at providing technical data, as well as statistics. They indicate that as of 2018, the world had about a 130 year supply of uranium.
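The magic-wand example above is just inverse proportionality: years of supply scale with the reserve estimate and inversely with the rate of consumption. A quick sketch, using the WNA-style figure of roughly 90 years at current consumption as an illustrative starting point:

```python
def years_of_supply(years_at_current_rate, consumption_multiplier):
    """Remaining years of supply if consumption is scaled by the given factor,
    holding the reserve estimate and technology fixed."""
    return years_at_current_rate / consumption_multiplier

print(years_of_supply(90, 1))    # 90.0 years at current consumption
print(years_of_supply(90, 2))    # 45.0 years if consumption doubles
print(years_of_supply(90, 0.5))  # 180.0 years if consumption halves
```

Of course, as the text notes, every input here is a moving target: reserve estimates change and extraction technology advances.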

Optional Reading

The author of the article below provides a number of reasons why nuclear energy will not play a large role in the global energy future.


The first nuclear power plant came online in 1954 in Russia (then the Soviet Union), and according to the World Nuclear Association, there are 443 reactors worldwide and another 53 under construction. The technology is well known by now, and despite the extreme danger posed by nuclear meltdowns, there have been very few major incidents. You are probably familiar with the Fukushima Daiichi meltdown that happened in 2011, and perhaps have heard of Chernobyl in Ukraine in 1986 (still the worst nuclear disaster to date), and maybe even Three Mile Island in the U.S. in 1979. Here is a partial list of nuclear accidents in history from the Union of Concerned Scientists (UCS).

But putting this risk aside for the moment, nuclear energy has shown itself to be a viable source of electricity, and likely will continue to be used for the foreseeable future. Among other things, nuclear power plants generally have a useful lifetime of around 40-60 years, so we are "locked in" until mid-century at least. That said, increasing the use of today's nuclear technology would likely pose some problems, for a variety of reasons. The article below sums up these and a few other arguments for and against nuclear energy.

Sustainability Issues

Okay, now for the fun part. Nuclear energy is a mixed bag in terms of the question of sustainability. The biggest dilemma for those who are concerned about anthropogenic climate change but skeptical of nuclear power is that nuclear energy is considered a carbon-free source; because it is responsible for a significant portion of non-fossil-fuel electricity production worldwide and is a proven and reliable source, many see it as a good option. Note that despite being considered "carbon free," nuclear energy results in some lifecycle emissions because of the materials used in mining, building the power plant, and so forth. (Lifecycle emissions are all the emissions generated by all of the processes required to make an energy source, including the mining of materials, the manufacturing of equipment, and the operation of that equipment.) But according to the National Renewable Energy Laboratory (NREL), a U.S. national lab, it has approximately the same lifecycle emissions as renewable energy sources.

Nuclear energy is a very reliable source of electricity, and power plants can operate at near full capacity consistently. Once a plant is built, electricity is relatively inexpensive to generate. But nuclear energy is very expensive in terms of lifetime costs (as you'll see in the article below), and the waste from nuclear reactors can remain dangerous for thousands of years, which can result in large externalities. Since plants are so expensive, there is an incentive to keep them online for as long as possible to recoup costs, so people are effectively "locked in" once a plant is built. There is, of course, the risk of another disaster, which, however rare the possibility, could be catastrophic. There are also some issues with the equity impacts of uranium, particularly in terms of mining. There is not an easy answer here, as there are reasonable and strong arguments on both sides.

To Read Now

The first article below is a good example of why, when it comes to finding good information sources, it pays to pay attention to citations and to be well informed on a topic. The article is on a website that I had never heard of before, so at first I was suspicious of the content. However, they provide legitimate sources for the information presented, and I have enough prior knowledge to know that the arguments they put forth are legitimate. Overall, it's a good summary of some of the pros and cons of nuclear energy, though I have a few minor issues with the content, as I'll describe below. (See if you can figure out what I take issue with.)

  • "Pros and Cons of Nuclear Energy."
  • (Optional) "Unable to Compete on Price, Nuclear Power on the Decline in the U.S." Brian Mann, PRI.
  • (Optional) "Nuclear: Carbon-Free, but Not Free of Unease." Henry Fountain, New York Times.
  • (Optional) "Nuclear Power Prevents More Deaths Than It Causes." Mark Schrope, Chemical & Engineering News.

Did you guess the two issues I have with the first article? First, the author calls nuclear a very "efficient" energy source. If you recall from previous lessons, the efficiency of a nuclear power plant hovers around 35%. What he is really describing is energy density (a lot of energy by volume), which he confusingly also discusses under that name. The second - and more subtle - problem I have is with the assertion that nuclear is an "inexpensive" energy source. That nuclear is in fact expensive was clearly indicated in the second article (if you read it) and is also borne out by EIA data. Nuclear plants are inexpensive to run once they are built, but they are extremely expensive to build. The author glosses over that part, but it is a really important consideration.

Regarding the cost of nuclear: The high up-front cost makes nuclear power one of the most expensive types of electricity available. For a technical discussion of this, feel free to read through this description of levelized cost of electricity from the EIA, which indicates that over the lifetime of the energy source, nuclear is more expensive than geothermal, onshore wind, solar, hydroelectric, and most types of natural gas plants.
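As a rough illustration of how a levelized cost works (this is a simplification, not the EIA's actual methodology, which also discounts future costs and output): spread the up-front capital cost plus lifetime operating and fuel costs over lifetime generation. Every number below is an assumed, illustrative figure, not EIA data.

```python
def simple_lcoe(capital_cost, annual_om, annual_fuel, annual_mwh, lifetime_years):
    """Lifetime cost per MWh, ignoring discounting for clarity."""
    total_cost = capital_cost + lifetime_years * (annual_om + annual_fuel)
    total_mwh = lifetime_years * annual_mwh
    return total_cost / total_mwh

# A hypothetical 1 GW plant at a 90% capacity factor over 50 years:
annual_mwh = 1_000 * 8_760 * 0.90          # MW x hours/year x capacity factor
cost = simple_lcoe(capital_cost=8e9,       # $8 billion up front (assumed)
                   annual_om=120e6,        # $120 million/year O&M (assumed)
                   annual_fuel=40e6,       # $40 million/year fuel (assumed)
                   annual_mwh=annual_mwh,
                   lifetime_years=50)
print(f"${cost:.0f}/MWh")  # $41/MWh
```

With these assumed figures, the running costs alone come to only about $20/MWh; the other half of the levelized cost is the up-front construction. That is the "cheap to run, expensive to build" pattern described above.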


Nuclear is a mixed bag. To summarize:

  • Nuclear is reliable and almost carbon-free, but is non-renewable.
  • Nuclear is relatively inexpensive to operate once established, but the high up-front cost makes it one of the most expensive electricity sources.
  • Because power plants are so expensive to build, once they are built they are generally used for as long as possible, as long as they can be operated profitably. (Gotta get that investment back!) We are effectively "locked in" once they are built.
  • When accidents happen, they can be catastrophic, but they are extremely rare.
  • The waste product from nuclear power plants is dangerous for thousands of years, and right now we have no way of safely disposing of it - it is kept in storage, usually at the power plants themselves. This has not yet proven to be a major problem, but society will be dealing with the waste for thousands of years.

Nuclear is a very controversial source of energy. It is embraced by many as a key to a carbon-free future, while many others think we should move away from it because of its inherent danger and/or expense and/or general sustainability problems. There are arguments to be made on each side. Hopefully, you have a better handle on some of them after reading through this.

Check Your Understanding

Why are we "locked in" to the use of nuclear energy once a plant is built?

Optional (But Strongly Suggested)

Now that you have completed the content, I suggest going through the Learning Objectives Self-Check list at the top of the page.


Key Message 1: Observed Changes in Global Climate

Global climate is changing rapidly compared to the pace of natural variations in climate that have occurred throughout Earth’s history. Global average temperature has increased by about 1.8°F from 1901 to 2016, and observational evidence does not support any credible natural explanations for this amount of warming; instead, the evidence consistently points to human activities, especially emissions of greenhouse or heat-trapping gases, as the dominant cause. (Very High Confidence)

Description of evidence base

The Key Message and supporting text summarize extensive evidence documented in the climate science literature and are similar to statements made in previous national (NCA3) 1 and international 249 assessments. The human effects on climate have been well documented through many papers in the peer-reviewed scientific literature (e.g., see Fahey et al. 2017 18 and Knutson et al. 2017 16 for more discussion of supporting evidence).

The finding of an increasingly strong positive forcing over the industrial era is supported by observed increases in atmospheric temperatures (see Wuebbles et al. 2017 10 ) and by observed increases in ocean temperatures. 10 , 57 , 76 The attribution of climate change to human activities is supported by climate models, which are able to reproduce observed temperature trends when radiative forcing from human activities is included and deviate considerably from observed trends when only natural forcings are included (Wuebbles et al. 2017; Knutson et al. 2017, Figure 3.1 10 , 16 ).

Major uncertainties

Key remaining uncertainties relate to the precise magnitude and nature of changes at global, and particularly regional, scales, especially for extreme events, and to our ability to simulate and attribute such changes using climate models. The exact effects from land-use changes relative to the effects from greenhouse gas emissions need to be better understood.

The largest source of uncertainty in radiative forcing (both natural and anthropogenic) over the industrial era is quantifying forcing by aerosols. This finding is consistent across previous assessments (e.g., IPCC 2007; IPCC 2013 249 , 250 ).

Recent work has highlighted the potentially larger role of variations in ultraviolet solar irradiance, versus total solar irradiance, in solar forcing. However, this increase in solar forcing uncertainty is not sufficiently large to reduce confidence that anthropogenic activities dominate industrial-era forcing.

Description of confidence and likelihood

There is very high confidence for a major human influence on climate.

Assessments of the natural forcings of solar irradiance changes and volcanic activity show with very high confidence that both forcings are small over the industrial era relative to total anthropogenic forcing. Total anthropogenic forcing is assessed to have become larger and more positive during the industrial era, while natural forcings show no similar trend.

Key Message 2: Future Changes in Global Climate

Earth’s climate will continue to change over this century and beyond (very high confidence). Past mid-century, how much the climate changes will depend primarily on global emissions of greenhouse gases and on the response of Earth’s climate system to human-induced warming (very high confidence). With significant reductions in emissions, global temperature increase could be limited to 3.6°F (2°C) or less compared to preindustrial temperatures (high confidence). Without significant reductions, annual average global temperatures could increase by 9°F (5°C) or more by the end of this century compared to preindustrial temperatures (high confidence).
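The Fahrenheit and Celsius figures in this Key Message pair up via the conversion for temperature differences: multiply by 9/5 only, since the +32 offset applies to absolute temperatures, not to differences.

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *difference* from Celsius to Fahrenheit.
    No +32 offset: that applies only to absolute temperatures."""
    return delta_c * 9 / 5

print(delta_c_to_f(1.0))  # 1.8  (the observed warming since 1901, in F)
print(delta_c_to_f(2.0))  # 3.6  (the low-emissions target above)
print(delta_c_to_f(5.0))  # 9.0  (the high-emissions end-of-century figure)
```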

Description of evidence base

The Key Message and supporting text summarize extensive evidence documented in the climate science literature and are similar to statements made in previous national (NCA3) 1 and international 249 assessments. The projections for future climate have been well documented through many papers in the peer-reviewed scientific literature (e.g., see Hayhoe et al. 2017 24 for descriptions of the scenarios and the models used).

Major uncertainties

Key remaining uncertainties relate to the precise magnitude and nature of changes at global, and particularly regional, scales, especially for extreme events, and to our ability to simulate and attribute such changes using climate models. Of particular importance are remaining uncertainties in the understanding of feedbacks in the climate system, especially in ice–albedo and cloud cover feedbacks. Continued improvements in climate modeling to represent the physical processes affecting the Earth’s climate system are aimed at reducing uncertainties. Enhanced monitoring and observation programs also can help improve the understanding needed to reduce uncertainties.

Description of confidence and likelihood

There is very high confidence for continued changes in climate and high confidence for the levels shown in the Key Message.

Key Message 3: Warming and Acidifying Oceans

The world’s oceans have absorbed 93% of the excess heat from human-induced warming since the mid-20th century and are currently absorbing more than a quarter of the carbon dioxide emitted to the atmosphere annually from human activities, making the oceans warmer and more acidic (very high confidence). Increasing sea surface temperatures, rising sea levels, and changing patterns of precipitation, winds, nutrients, and ocean circulation are contributing to overall declining oxygen concentrations in many locations (high confidence).

Description of evidence base

The Key Message and supporting text summarize the evidence documented in climate science literature as summarized in Rhein et al. (2013). 31 Oceanic warming has been documented in a variety of data sources, most notably by the World Ocean Circulation Experiment (WOCE), 251 Argo, 252 and the Extended Reconstructed Sea Surface Temperature v4 (ERSSTv4). 253 There is particular confidence in calculated warming for the time period since 1971 due to increased spatial and depth coverage and the level of agreement among independent sea surface temperature (SST) observations from satellites, surface drifters and ships, and independent studies using differing analyses, bias corrections, and data sources. 20 , 33 , 68 Other observations such as the increase in mean sea level (see Sweet et al. 2017 76 ) and reduced Arctic/Antarctic ice sheets (see Taylor et al. 2017 122 ) further confirm the increase in thermal expansion. For the purpose of extending the selected time periods back from 1900 to 2016 and analyzing U.S. regional SSTs, the ERSSTv4 253 is used. For the centennial time scale changes over 1900–2016, warming trends in all regions are statistically significant at the 95% confidence level. U.S. regional SST warming is similar between calculations using ERSSTv4 in this report and those published by Belkin (2016), 254 suggesting confidence in these findings.

Evidence for oxygen trends arises from extensive global measurements of WOCE after 1989 and individual profiles before that. 43 The first basin-wide dissolved oxygen surveys were performed in the 1920s. 255 The confidence level is based on globally integrated O2 distributions in a variety of ocean models. Although the global mean exhibits low interannual variability, regional contrasts are large.

Major uncertainties

Uncertainties in the magnitude of ocean warming stem from the disparate measurements of ocean temperature over the last century. There is high confidence in warming trends of the upper ocean temperature from 0–700 m depth, whereas there is more uncertainty for deeper ocean depths of 700–2,000 m due to the short record of measurements from those areas. Data on warming trends at depths greater than 2,000 m are even more sparse. There are also uncertainties in the timing and reasons for particular decadal and interannual variations in ocean heat content and the contributions that different ocean basins play in the overall ocean heat uptake.

Uncertainties in ocean oxygen content (as estimated from the intermodel spread) in the global mean are moderate, mainly because ocean oxygen content exhibits low interannual variability when globally averaged. Uncertainties in long-term decreases of the globally averaged oxygen concentration amount to 25% in the upper 1,000 m for the 1970–1992 period and 28% for the 1993–2003 period. Remaining uncertainties relate to regional variability driven by mesoscale eddies and intrinsic climate variability such as ENSO.

Description of confidence and likelihood

There is very high confidence in measurements that show increases in the ocean heat content and warming of the ocean, based on the agreement of different methods. However, long-term data in total ocean heat uptake in the deep ocean are sparse, leading to limited knowledge of the transport of heat between and within ocean basins.

Major ocean deoxygenation is taking place in inland bodies of water, in estuaries, and in the coastal and open ocean (high confidence). Regionally, the phenomenon is exacerbated by local changes in weather, ocean circulation, and continental inputs to the oceans.

Key Message 4: Rising Global Sea Levels

Global average sea level has risen by about 7–8 inches (16–21 cm) since 1900, with almost half this rise occurring since 1993 as oceans have warmed and land-based ice has melted (very high confidence). Relative to the year 2000, sea level is very likely to rise 1 to 4 feet (0.3 to 1.3 m) by the end of the century (medium confidence). Emerging science regarding Antarctic ice sheet stability suggests that, for higher scenarios, a rise exceeding 8 feet (2.4 m) by 2100 is physically possible, although the probability of such an extreme outcome cannot currently be assessed.

Description of evidence base

Multiple researchers, using different statistical approaches, have integrated tide gauge records to estimate global mean sea level (GMSL) rise since the late 19th century (e.g., Church and White 2006, 2011; Hay et al. 2015; Jevrejeva et al. 2009). 61, 73, 74, 256 The most recent published rate estimates are 1.2 ± 0.2 mm/year 73 or 1.5 ± 0.2 mm/year 74 over 1901–1990. Thus, these results indicate about 4–5 inches (11–14 cm) of GMSL rise from 1901 to 1990. Tide gauge analyses indicate that GMSL has risen at a considerably faster rate of about 0.12 inches/year (3 mm/year) since 1993, 73, 74 a result supported by satellite data indicating a trend of 0.13 inches/year (3.4 ± 0.4 mm/year) over 1993–2015 (update to Nerem et al. 2010; 75 see also Sweet et al. 2017, 57 Figure 12.3a). These results indicate an additional GMSL rise of about 3 inches (7 cm) since 1990. Thus, total GMSL rise since 1900 is about 7–8 inches (18–21 cm).
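The rate-times-duration arithmetic behind these totals can be sketched directly. The rates and periods below are the ones quoted above; the helper function itself is purely illustrative, not assessment code.

```python
# Illustrative sketch (not assessment code): reproducing the rate-times-duration
# arithmetic behind the GMSL totals quoted above.

def gmsl_rise_cm(rate_mm_per_yr: float, start_yr: int, end_yr: int) -> float:
    """Linear sea level rise over a period, converted from mm to cm."""
    return rate_mm_per_yr * (end_yr - start_yr) / 10.0

# Tide gauge reconstructions, 1901-1990: 1.2-1.5 mm/year
early_low = gmsl_rise_cm(1.2, 1901, 1990)   # ~11 cm
early_high = gmsl_rise_cm(1.5, 1901, 1990)  # ~13 cm

# Tide gauges and satellite altimetry, 1993 onward: ~3 mm/year
recent = gmsl_rise_cm(3.0, 1993, 2016)      # ~7 cm

print(early_low, early_high, recent)
```

Adding the early-century and post-1993 contributions recovers the roughly 18–21 cm total cited in the text.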

The finding regarding the historical context of the 20th-century change is based upon Kopp et al. (2016), 58 who conducted a meta-analysis of geological regional sea level (RSL) reconstructions, spanning the last 3,000 years, from 24 locations around the world, as well as tide gauge data from 66 sites and the tide-gauge-based GMSL reconstruction of Hay et al. (2015). 73 By constructing a spatiotemporal statistical model of these datasets, they identified the common global sea level signal over the last three millennia, and its uncertainties. They found a 95% probability that the average rate of GMSL change over 1900–2000 was greater than during any preceding century in at least 2,800 years.

The lower bound of the very likely range is based on a continuation of the observed, approximately 3 mm/year rate of GMSL rise. The upper end of the very likely range is based on estimates for a higher scenario (RCP8.5) from three studies producing fully probabilistic projections across multiple RCPs. Kopp et al. (2014) 77 fused multiple sources of information accounting for the different individual processes contributing to GMSL rise. Kopp et al. (2016) 58 constructed a semi-empirical sea level model calibrated to the Common Era sea level reconstruction. Mengel et al. (2016) 257 constructed a set of semi-empirical models of the different contributing processes. All three studies show scenario dependence that is negligible in the first half of this century but increasingly prominent in the second half. A sensitivity study by Kopp et al. (2014), 77 as well as studies by Jevrejeva et al. (2014) 78 and by Jackson and Jevrejeva (2016), 258 used frameworks similar to Kopp et al. (2016) 58 but incorporated an expert elicitation study on ice sheet stability. 259 (This study was incorporated in the main results of Kopp et al. 2014 77 with adjustments for consistency with Church et al. 2013. 56) These studies extend the very likely range for RCP8.5 as high as 5–6 feet (160–180 cm; see Kopp et al. 2014, sensitivity study; Jevrejeva et al. 2014; Jackson and Jevrejeva 2016). 77, 78, 258

As described in Sweet et al. (2017), 57 Miller et al. (2013), 260 and Kopp et al. (2017), 81 several lines of argument support a plausible worst-case GMSL rise scenario in the range of 2.0 m to 2.7 m by 2100. Pfeffer et al. (2008) 261 constructed a “worst-case” 2.0 m scenario, based on acceleration of mass loss from Greenland, that assumed a 30 cm GMSL contribution from thermal expansion. However, Sriver et al. (2012) 262 find a physically plausible upper bound from thermal expansion exceeding 50 cm (an additional 20 cm). The 60 cm maximum contribution by 2100 from Antarctica in Pfeffer et al. (2008) 261 could be exceeded by 30 cm, assuming the 95th percentile for Antarctic melt rate (22 mm/year) of the Bamber and Aspinall (2013) 259 expert elicitation study is achieved by 2100 through a linear growth in melt rate. The Pfeffer et al. (2008) 261 study did not include the possibility of a net decrease in land-water storage due to groundwater withdrawal; Church et al. (2013) 56 find a likely land-water storage contribution to 21st-century GMSL rise of −1 cm to +11 cm. These arguments all point to the physical plausibility of GMSL rise in excess of 8 feet (240 cm).

Additional arguments come from model results examining the effects of marine ice-cliff collapse and ice-shelf hydro-fracturing on Antarctic loss rates. 80 To estimate the effect of incorporating the DeConto and Pollard (2016) 80 projections of Antarctic ice sheet melt, Kopp et al. (2017) 81 substituted the bias-corrected ensemble of DeConto and Pollard 80 into the Kopp et al. (2014) 77 framework. This elevates the projections for 2100 to 3.1–8.0 feet (93–243 cm) for RCP8.5, 1.6–5.2 feet (50–158 cm) for RCP4.5, and 0.9–3.2 feet (26–98 cm) for RCP2.6. DeConto and Pollard 80 is just one study, not designed in a manner intended to produce probabilistic projections, and so these results cannot be used to ascribe probability; they do, however, support the physical plausibility of GMSL rise in excess of 8 feet.

Very likely ranges, 2030 relative to 2000 in cm (feet)

| Scenario | Kopp et al. (2014) 77 | Kopp et al. (2016) 58 | Kopp et al. (2017) 81 + DP16 | Mengel et al. (2016) 257 |
| --- | --- | --- | --- | --- |
| RCP8.5 (higher) | 11–18 (0.4–0.6) | 8–15 (0.3–0.5) | 6–22 (0.2–0.7) | 7–12 (0.2–0.4) |
| RCP4.5 (lower) | 10–18 (0.3–0.6) | 8–15 (0.3–0.5) | 6–23 (0.2–0.8) | 7–12 (0.2–0.4) |
| RCP2.6 (very low) | 10–18 (0.3–0.6) | 8–15 (0.3–0.5) | 6–23 (0.2–0.8) | 7–12 (0.2–0.4) |

Very likely ranges, 2050 relative to 2000 in cm (feet)
| Scenario | Kopp et al. (2014) 77 | Kopp et al. (2016) 58 | Kopp et al. (2017) 81 + DP16 | Mengel et al. (2016) 257 |
| --- | --- | --- | --- | --- |
| RCP8.5 (higher) | 21–38 (0.7–1.2) | 16–34 (0.5–1.1) | 17–48 (0.6–1.6) | 15–28 (0.5–0.9) |
| RCP4.5 (lower) | 18–35 (0.6–1.1) | 15–31 (0.5–1.0) | 14–43 (0.5–1.4) | 14–25 (0.5–0.8) |
| RCP2.6 (very low) | 18–33 (0.6–1.1) | 14–29 (0.5–1.0) | 12–41 (0.4–1.3) | 13–23 (0.4–0.8) |

Very likely ranges, 2100 relative to 2000 in cm (feet)
| Scenario | Kopp et al. (2014) 77 | Kopp et al. (2016) 58 | Kopp et al. (2017) 81 + DP16 | Mengel et al. (2016) 257 |
| --- | --- | --- | --- | --- |
| RCP8.5 (higher) | 55–121 (1.8–4.0) | 52–131 (1.7–4.3) | 93–243 (3.1–8.0) | 57–131 (1.9–4.3) |
| RCP4.5 (lower) | 36–93 (1.2–3.1) | 33–85 (1.1–2.8) | 50–158 (1.6–5.2) | 37–77 (1.2–2.5) |
| RCP2.6 (very low) | 29–82 (1.0–2.7) | 24–61 (0.8–2.0) | 26–98 (0.9–3.2) | 28–56 (0.9–1.8) |
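As a quick sanity check on the dual units in the tables above, the centimeter and feet ranges are related by a fixed conversion factor; the example values are taken from the 2100 RCP8.5 Kopp et al. (2017)/DP16 column.

```python
# Illustrative unit check for the dual-unit tables above: centimeters convert
# to feet by dividing by 30.48.

def cm_to_feet(cm: float) -> float:
    return cm / 30.48

# 2100, RCP8.5, Kopp et al. (2017) + DP16 column: 93-243 cm
low_ft = round(cm_to_feet(93), 1)
high_ft = round(cm_to_feet(243), 1)
print(low_ft, high_ft)  # 3.1 8.0
```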

Major uncertainties

Uncertainties in reconstructed GMSL change relate to the sparsity of tide gauge records, particularly before the middle of the 20th century, and to the different statistical approaches for estimating GMSL change from these sparse records. Uncertainties in reconstructed GMSL change before the 20th century also relate to the sparsity of geological proxies for sea level change and to their interpretation and dating. Uncertainty in attribution relates to the reconstruction of past changes and the magnitude of unforced variability.

Since NCA3, multiple different approaches have been used to generate probabilistic projections of GMSL rise, conditional upon the RCPs. These approaches are in general agreement. However, emerging results indicate that marine-based sectors of the Antarctic ice sheet are more unstable than previous modeling indicated. The rate of ice sheet mass changes remains challenging to project.

Description of confidence and likelihood

This Key Message is based upon multiple analyses of tide gauge and satellite altimetry records, on a meta-analysis of multiple geological proxies for pre-instrumental sea level change, and on both statistical and physical analyses of the human contribution to GMSL rise since 1900.

It is also based upon multiple methods for estimating the probability of future sea level change and on new modeling results regarding the stability of marine-based ice in Antarctica.

Confidence is very high in the rate of GMSL rise since 1900, based on multiple different approaches to estimating GMSL rise from tide gauges and satellite altimetry. Confidence is high in the substantial human contribution to GMSL rise since 1900, based on both statistical and physical modeling evidence. There is medium confidence that the magnitude of the observed rise since 1900 is unprecedented in the context of the previous 2,700 years, based on meta-analysis of geological proxy records.

There is very high confidence that GMSL rise over the next several decades will be at least as fast as a continuation of the historical trend over the last quarter century would indicate. There is medium confidence in the upper end of very likely ranges for 2030 and 2050. Due to possibly large ice sheet contributions, there is low confidence in the upper end of very likely ranges for 2100. Based on multiple projection methods, there is high confidence that differences between scenarios are small before 2050 but significant beyond 2050.

Key Message 5: Increasing U.S. Temperatures

Annual average temperature over the contiguous United States has increased by 1.2°F (0.7°C) over the last few decades and by 1.8°F (1°C) relative to the beginning of the last century (very high confidence). Additional increases in annual average temperature of about 2.5°F (1.4°C) are expected over the next few decades regardless of future emissions, and increases ranging from 3°F to 12°F (1.6°C to 6.6°C) are expected by the end of the century, depending on whether the world follows a higher or lower future scenario, with proportionally greater changes in high temperature extremes (high confidence).

Description of evidence base

The Key Message and supporting text summarize extensive evidence documented in the climate science literature. Similar statements about changes exist in other reports (e.g., NCA3, 1 Climate Change Impacts in the United States, 263 SAP 1.1: Temperature trends in the lower atmosphere). 264

Evidence for changes in U.S. climate arises from multiple analyses of data from in situ, satellite, and other records undertaken by many groups over several decades. The primary dataset for surface temperatures in the United States is nClimGrid, 85 , 152 though trends are similar in the U.S. Historical Climatology Network, the Global Historical Climatology Network, and other datasets. Several atmospheric reanalyses (e.g., 20th Century Reanalysis, Climate Forecast System Reanalysis, ERA-Interim, and Modern Era Reanalysis for Research and Applications) confirm rapid warming at the surface since 1979, and observed trends closely track the ensemble mean of the reanalyses. 265 Several recently improved satellite datasets document changes in middle tropospheric temperatures. 7 , 266 Longer-term changes are depicted using multiple paleo analyses (e.g., Trouet et al. 2013, Wahl and Smerdon 2012). 86 , 267

Evidence for changes in U.S. climate arises from multiple analyses of in situ data using widely published climate extremes indices. For the analyses presented here, the source of in situ data is the Global Historical Climatology Network–Daily dataset. 268 Changes in extremes were assessed using long-term stations with minimal missing data to avoid network-induced variability on the long-term time series. Cold wave frequency was quantified using the Cold Spell Duration Index, 269 heat wave frequency was quantified using the Warm Spell Duration Index, 269 and heat wave intensity was quantified using the Heat Wave Magnitude Index Daily. 270 Station-based index values were averaged into 4° grid boxes, which were then area-averaged into a time series for the contiguous United States. Note that a variety of other threshold and percentile-based indices were also evaluated, with consistent results (e.g., the Dust Bowl was consistently the peak period for extreme heat). Changes in record-setting temperatures were quantified, as in Meehl et al. (2016). 13
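The gridding step described above can be sketched as follows. The 4° box size follows the text, but the cosine-latitude area weighting and the demo station records are illustrative assumptions, not the operational GHCN-Daily processing.

```python
# Illustrative sketch of the gridding step described above: station-based index
# values are averaged into 4-degree grid boxes, and the boxes are then
# area-averaged into a single national value. The cosine-latitude weighting and
# the demo stations are assumptions for illustration, not the operational
# GHCN-Daily processing.
import math
from collections import defaultdict

def grid_and_area_average(stations, box_deg=4.0):
    """stations: iterable of (lat, lon, index_value) tuples for one year."""
    boxes = defaultdict(list)
    for lat, lon, value in stations:
        key = (math.floor(lat / box_deg), math.floor(lon / box_deg))
        boxes[key].append((lat, value))
    num = den = 0.0
    for members in boxes.values():
        box_mean = sum(v for _, v in members) / len(members)
        # weight each box by the cosine of its mean station latitude
        weight = math.cos(math.radians(sum(lat for lat, _ in members) / len(members)))
        num += weight * box_mean
        den += weight
    return num / den

# Three hypothetical stations; the first two fall in the same 4-degree box.
demo = [(40.1, -105.3, 2.0), (41.9, -104.8, 4.0), (30.5, -97.7, 1.0)]
print(grid_and_area_average(demo))
```

Averaging within boxes first, as here, prevents densely instrumented areas from dominating the national time series, which is the motivation the text gives for gridding before area-averaging.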

Projections are based on global model results and associated downscaled products from CMIP5 for a lower scenario (RCP4.5) and a higher scenario (RCP8.5). Model weighting is employed to refine projections for each RCP. Weighting parameters are based on model independence and skill over North America for seasonal temperature and annual extremes. The multimodel mean is based on 32 model projections that were statistically downscaled using the LOcalized Constructed Analogs technique. 247 The range is defined as the difference between the average increase in the three coolest models and the average increase in the three warmest models. All increases are significant (i.e., more than 50% of the models show a statistically significant change, and more than 67% agree on the sign of the change). 271
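The range definition stated above (the difference between the average of the three coolest and the three warmest models) can be sketched as follows; the model warming values are hypothetical placeholders, not CMIP5 output.

```python
# Illustrative sketch of the range definition stated above: the low end is the
# average warming of the three coolest models, the high end the average of the
# three warmest. Model values are hypothetical placeholders, not CMIP5 output.

def projection_range(model_warming):
    ordered = sorted(model_warming)
    coolest = sum(ordered[:3]) / 3
    warmest = sum(ordered[-3:]) / 3
    return coolest, warmest

demo = [2.1, 2.4, 2.8, 3.0, 3.3, 3.7, 4.0, 4.4, 4.9, 5.6]  # degrees F, made up
low, high = projection_range(demo)
print(low, high)
```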

Major uncertainties

The primary uncertainties for surface data relate to historical changes in station location, temperature instrumentation, observing practice, and spatial sampling (particularly in areas and periods with low station density, such as the intermountain West in the early 20th century). Much research has been done to account for these issues, resulting in techniques that make adjustments at the station level to improve the homogeneity of the time series (e.g., Easterling and Peterson 1995; Menne and Williams 2009). 272, 273 Further, Easterling et al. (1996) 274 examined differences in area-averaged time series at various scales for homogeneity-adjusted versus non-adjusted temperature data and found that once the area reached the scale of the NCA regions, little difference was found. Satellite records are similarly impacted by non-climatic changes such as orbital decay, diurnal sampling, and instrument calibration to target temperatures. Several uncertainties are inherent in temperature-sensitive proxies, such as dating techniques and spatial sampling.

Global climate models are subject to structural and parametric uncertainty, resulting in a range of estimates of future changes in average temperature. This is partially mitigated through the use of model weighting and pattern scaling. Furthermore, virtually every ensemble member of every model projection contains an increase in temperature by mid- and late-century. Empirical downscaling introduces additional uncertainty (e.g., with respect to stationarity).

Description of confidence and likelihood

There is very high confidence in trends since 1895, based on the instrumental record, since this is a long-term record with measurements made with relatively high precision. There is high confidence for trends that are based on surface/satellite agreement since 1979, since this is a shorter record. There is medium confidence for trends based on paleoclimate data, as this is a long record but with relatively low precision.

There is very high confidence in observed changes in average annual and seasonal temperature and observed changes in temperature extremes over the United States, as these are based upon the convergence of evidence from multiple data sources, analyses, and assessments including the instrumental record.

There is high confidence that the range of projected changes in average temperature and temperature extremes over the United States encompasses the range of likely change, based upon the convergence of evidence from basic physics, multiple model simulations, analyses, and assessments.

Key Message 6: Changing U.S. Precipitation

Annual precipitation since the beginning of the last century has increased across most of the northern and eastern United States and decreased across much of the southern and western United States. Over the coming century, significant increases are projected in winter and spring over the Northern Great Plains, the Upper Midwest, and the Northeast (medium confidence). Observed increases in the frequency and intensity of heavy precipitation events in most parts of the United States are projected to continue (high confidence). Surface soil moisture over most of the United States is likely to decrease (medium confidence), accompanied by large declines in snowpack in the western United States (high confidence) and shifts to more winter precipitation falling as rain rather than snow (medium confidence).

Description of evidence base

The Key Message and supporting text summarize extensive evidence documented in the peer-reviewed climate science literature and previous National Climate Assessments (e.g., Karl et al. 2009; Walsh et al. 2014). 88, 263 Evidence of long-term changes in precipitation is based on analysis of daily precipitation observations from the U.S. Cooperative Observer Network, as shown in Easterling et al. (2017), 94 Figure 7.1. Published work, such as the Third National Climate Assessment, and Figure 7.1 94 show important regional and seasonal differences in U.S. precipitation change since 1901.

Numerous papers have been written documenting observed changes in heavy precipitation events in the United States (e.g., Kunkel et al. 2003; Groisman et al. 2004), 275, 276 which were cited in the Third National Climate Assessment, as well as those cited in this assessment. Although station-based analyses (e.g., Westra et al. 2013 277) do not show large numbers of statistically significant station-based trends, area averaging reduces the noise inherent in station-based data and produces robust increasing signals (see Easterling et al. 2017, 94 Figures 7.2 and 7.3). Evidence of long-term changes in precipitation is based on analysis of daily precipitation observations from the U.S. Cooperative Observer Network, as shown in Easterling et al. (2017), 94 Figures 7.2, 7.3, and 7.4.

Evidence of historical changes in snow cover extent and reduction in extreme snowfall years is consistent with our understanding of the climate system’s response to increasing greenhouse gases. Furthermore, climate models continue to consistently show future declines in snowpack in the western United States. Recent model projections for the eastern United States also confirm a future shift from snowfall to rainfall during the cold season in colder portions of the central and eastern United States. Each of these changes is documented in the peer-reviewed literature and cited in the main text of this chapter.

Evidence of future change in precipitation is based on climate model projections and our understanding of the climate system’s response to increasing greenhouse gases, and on regional mechanisms behind the projected changes. In particular, Figure 7.7 in Easterling et al. (2017) 94 documents projected changes in the 20-year return period amount using the LOCA data, and Figure 7.6 94 shows changes in 2-day totals for the 5-year return period using the CMIP5 suite of models. Each figure shows robust changes in extreme precipitation events as they are defined in the figure. Figure 7.5, 94 however, shows changes in seasonal and annual precipitation and indicates where confidence in the changes is higher based on consistency between the models; there are large areas where the projected change is uncertain.

Major uncertainties

The main issue that relates to uncertainty in historical trends is the sensitivity of observed precipitation trends to the spatial distribution of observing stations and to historical changes in station location, rain gauges, the local landscape, and observing practices. These issues are mitigated somewhat by new methods to produce spatial grids 152 through time.

Uncertainty also arises from the sensitivity of observed snow changes to the spatial distribution of observing stations and to historical changes in station location, gauges, and observing practices, particularly for snow. Future changes in the frequency and intensity of meteorological systems causing heavy snow are less certain than temperature changes.

A key issue is how well climate models simulate precipitation, which is one of the more challenging aspects of weather and climate simulation. In particular, comparisons of model projections for total precipitation (from both CMIP3 and CMIP5; see Sun et al. 2015 271) by NCA3 region show a spread of responses in some regions (e.g., Southwest) such that they are opposite from the ensemble average response. The continental United States is positioned in the transition zone between expected drying in the subtropics and projected wetting in the mid- and higher latitudes. There are some differences in the location of this transition between CMIP3 and CMIP5 models, and thus there remains uncertainty in the exact location of the transition zone.

Description of confidence and likelihood

Confidence is medium that precipitation has increased and high that heavy precipitation events have increased in the United States. Furthermore, confidence is also high that the important regional and seasonal differences in changes documented here are robust.

Based on evidence from climate model simulations and our fundamental understanding of the relationship of water vapor to temperature, confidence is high that extreme precipitation will increase in all regions of the United States. However, based on the evidence and understanding of the issues leading to uncertainties, confidence is medium that more total precipitation is projected for the northern United States and less for the Southwest.

Based on the evidence and understanding of the issues leading to uncertainties, confidence is medium that average annual precipitation has increased in the United States. Furthermore, confidence is also medium that the important regional and seasonal differences in changes documented in the text and in Figure 7.1 in Easterling et al. (2017) 94 are robust.

Given the evidence base and uncertainties, confidence is medium that snow cover extent has declined in the United States and medium that extreme snowfall years have declined in recent years. Confidence is high that western U.S. snowpack will decline in the future, and confidence is medium that a shift from snow domination to rain domination will occur in the parts of the central and eastern United States cited in the text, and that surface soil moisture (top 10 cm) will decrease.

Key Message 7: Rapid Arctic Change

In the Arctic, annual average temperatures have increased more than twice as fast as the global average, accompanied by thawing permafrost and loss of sea ice and glacier mass (very high confidence). Arctic-wide glacial and sea ice loss is expected to continue; by mid-century, it is very likely that the Arctic will be nearly free of sea ice in late summer (very high confidence). Permafrost is expected to continue to thaw over the coming century as well, and the carbon dioxide and methane released from thawing permafrost has the potential to amplify human-induced warming, possibly significantly (high confidence).

Description of evidence base

Annual average near-surface air temperatures across Alaska and the Arctic have increased over the last 50 years at a rate more than twice the global average. Observational studies using ground-based observing stations and satellites analyzed by multiple independent groups support this finding. The enhanced sensitivity of the arctic climate system to anthropogenic forcing is also supported by climate modeling evidence, indicating a solid grasp of the underlying physics. These multiple lines of evidence provide very high confidence of enhanced arctic warming with potentially significant impacts on coastal communities and marine ecosystems.
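The "more than twice the global average" statement amounts to comparing linear (least-squares) temperature trends for the two regions. A minimal sketch, using synthetic stand-in series rather than observational data:

```python
# Illustrative sketch: the "more than twice the global average" statement
# compares linear (least-squares) temperature trends. The series below are
# synthetic stand-ins, not observational data.

def linear_trend(years, values):
    """Ordinary least-squares slope, in units per year."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    num = sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

years = list(range(1970, 2020))
global_anom = [0.02 * (y - 1970) for y in years]  # 0.02 degC/yr, synthetic
arctic_anom = [0.05 * (y - 1970) for y in years]  # 0.05 degC/yr, synthetic

amplification = linear_trend(years, arctic_anom) / linear_trend(years, global_anom)
print(amplification)  # 2.5 for these synthetic series
```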

This aspect of the Key Message is supported by observational evidence from ground-based observing stations, satellites, and data model temperature analyses from multiple sources and independent analysis techniques. 117 , 118 , 119 , 120 , 121 , 136 , 278 For more than 40 years, climate models have predicted enhanced arctic warming, indicating a solid grasp of the underlying physics and positive feedbacks driving the accelerated arctic warming. 26 , 279 , 280 Lastly, similar statements have been made in NCA3, 1 IPCC AR5, 120 and in other arctic-specific assessments such as the Arctic Climate Impacts Assessment 281 and the Snow, Water, Ice and Permafrost in the Arctic assessment report. 129

Permafrost is thawing, becoming more discontinuous, and releasing carbon dioxide (CO2) and methane (CH4). Observational and modeling evidence indicates that permafrost has thawed and released additional CO2 and CH4, indicating that the permafrost–carbon feedback is positive, accounting for additional warming of approximately 0.08°C to 0.50°C on top of climate model projections. Although the magnitude and timing of the permafrost–carbon feedback are uncertain due to a range of poorly understood processes (deep soil and ice wedge processes, plant carbon uptake, dependence of uptake and emissions on vegetation and soil type, and the role of rapid permafrost thaw processes such as thermokarst), emerging science and the newest estimates continue to indicate that this feedback is more likely on the larger side of the range. Impacts of permafrost thaw and the permafrost–carbon feedback complicate our ability to limit future temperature changes by adding a currently unconstrained radiative forcing to the climate system.

This part of the Key Message is supported by observational evidence of warming permafrost temperatures and a deepening active layer, in situ gas measurements, laboratory incubation experiments of CO2 and CH4 release, and model studies. 126, 127, 282, 283, 284, 285 Alaska and arctic permafrost characteristics have responded to increased temperatures and reduced snow cover in most regions since the 1980s, with colder permafrost warming faster than warmer permafrost. 127, 129, 286 Large soil carbon pools (approximately half of the global below-ground organic carbon pool) are stored in permafrost soil, 287, 288 with the potential to be released. Thawing permafrost makes previously frozen organic matter available for microbial decomposition. In situ gas flux measurements have directly measured the release of CO2 and CH4 from arctic permafrost. 289, 290 The specific conditions of microbial decomposition, aerobic or anaerobic, determine the relative production of CO2 and CH4. This distinction is significant because CH4 is a much more powerful greenhouse gas than CO2. 17 However, incubation studies indicate that 3.4 times more carbon is released under aerobic conditions than anaerobic conditions, leading to a 2.3 times stronger radiative forcing under aerobic conditions. 284 Combined data and modeling studies suggest that the impact of the permafrost–carbon feedback on global temperatures could amount to +0.52° ± 0.38°F (+0.29° ± 0.21°C) by 2100. 124 Chadburn et al. (2017) 291 infer the sensitivity of permafrost area to globally averaged warming to be 1.5 million square miles (4 million square km), constraining a group of climate models with the observed spatial distribution of permafrost; this sensitivity is 20% higher than in previous studies. Permafrost thaw is occurring faster than models predict due to poorly understood deep soil, ice wedge, and thermokarst processes. 125, 282, 285, 292 Additional uncertainty stems from the surprising uptake of methane from mineral soils 293 and the dependence of emissions on vegetation and soil properties. 294 The observational and modeling evidence supports the Key Message that the permafrost–carbon feedback is positive (i.e., amplifies warming).
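The Fahrenheit and Celsius figures quoted above for the permafrost–carbon feedback are temperature differences, which convert by a factor of 1.8 with no 32-degree offset; a quick check:

```python
# Quick check of the dual-unit feedback estimate quoted above: these are
# temperature *differences*, so Celsius converts to Fahrenheit by a factor of
# 1.8 with no 32-degree offset.

def dC_to_dF(delta_c: float) -> float:
    return delta_c * 1.8

print(round(dC_to_dF(0.29), 2), round(dC_to_dF(0.21), 2))  # 0.52 0.38
```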

Arctic land and sea ice loss observed in the last three decades continues, in some cases accelerating. A diverse range of observational evidence from multiple data sources and independent analysis techniques provides consistent evidence of substantial declines in arctic sea ice extent, thickness, and volume since at least 1979, mountain glacier melt over the last 50 years, and accelerating mass loss from Greenland. An array of different models and independent analyses indicate that future declines in ice across the Arctic are expected, resulting in late summers in the Arctic very likely becoming ice free by mid-century.

This final aspect of the Key Message is supported by observational evidence from multiple ground-based and satellite-based observational techniques (including passive microwave, laser and radar altimetry, and gravimetry) analyzed by independent groups using different techniques reaching similar conclusions. 127 , 128 , 131 , 136 , 257 , 295 , 296 , 297 Additionally, the U.S. Geological Survey repeat photography database shows the glacier retreat for many Alaska glaciers (Taylor et al. 2017, 122 Figure 11.4). Several independent model analysis studies using a wide array of climate models and different analysis techniques indicate that sea ice loss will continue across the Arctic, very likely resulting in late summers becoming nearly ice-free by mid-century. 26 , 147 , 149

Major uncertainties

The lack of high-quality data and the restricted spatial resolution of surface and ground temperature data over many arctic land regions, coupled with the fact that there are essentially no measurements over the Central Arctic Ocean, hamper the ability to better refine the rate of arctic warming and severely limit our ability to quantify and detect regional trends, especially over the sea ice. Climate models generally produce arctic warming between two and three times the global mean warming. A key uncertainty is our quantitative knowledge of the contributions from individual feedback processes in driving the accelerated arctic warming. Reducing this uncertainty will help constrain projections of future arctic warming.

A lack of observations affects not only the ability to detect trends but also to quantify a potentially significant positive feedback to climate warming: the permafrost–carbon feedback. Major uncertainties are related to deep soil and thermokarst processes, as well as the persistence or degradation of massive ice (e.g., ice wedges) and the dependence of CO2 and CH4 uptake and production on vegetation and soil properties. Uncertainties also exist in relevant soil processes during and after permafrost thaw, especially those that control unfrozen soil carbon storage and plant carbon uptake and net ecosystem exchange. Many processes with the potential to drive rapid permafrost thaw (such as thermokarst) are not included in current Earth System Models.

Key uncertainties remain in the quantification and modeling of key physical processes that contribute to the acceleration of land and sea ice melting. Climate models are unable to capture the rapid pace of observed sea and land ice melt over the last 15 years; a major factor is our inability to quantify and accurately model the physical processes driving the accelerated melting. The interactions between atmospheric circulation, ice dynamics and thermodynamics, clouds, and specifically the influence on the surface energy budget are key uncertainties. Mechanisms controlling marine-terminating glacier dynamics, specifically the roles of atmospheric warming, seawater intrusions under floating ice shelves, and the penetration of surface meltwater to the glacier bed, are key uncertainties in projecting Greenland ice sheet melt.

Description of confidence and likelihood

Very high confidence that arctic surface and air temperatures have warmed across Alaska and the Arctic at a much faster rate than the global average is provided by the multiple datasets, analyzed by multiple independent groups, that all indicate the same conclusion. Additionally, climate models capture the enhanced warming in the Arctic, indicating a solid understanding of the underlying physical mechanisms.

There is high confidence that permafrost is thawing, becoming discontinuous, and releasing CO2 and CH4. Physically based arguments and observed increases in CO2 and CH4 emissions as permafrost thaws indicate that the feedback is positive. This confidence level is justified based on observations of rapidly changing permafrost characteristics.

There is very high confidence that arctic sea and land ice melt is accelerating and mountain glacier ice mass is declining, given the multiple observational sources and analysis techniques documented in the peer-reviewed climate science literature.

Key Message 8: Changes in Severe Storms

Human-induced change is affecting atmospheric dynamics and contributing to the poleward expansion of the tropics and the northward shift in Northern Hemisphere winter storm tracks since the 1950s (medium to high confidence). Increases in greenhouse gases and decreases in air pollution have contributed to increases in Atlantic hurricane activity since 1970 (medium confidence). In the future, Atlantic and eastern North Pacific hurricane rainfall (high confidence) and intensity (medium confidence) are projected to increase, as are the frequency and severity of landfalling “atmospheric rivers” on the West Coast (medium confidence).

Description of evidence base

The tropics have expanded poleward in each hemisphere over the period 1979–2009 (medium to high confidence) as shown by a large number of studies using a variety of metrics, observations, and reanalysis. Modeling studies and theoretical considerations illustrate that human activities like increases in greenhouse gases, ozone depletion, and anthropogenic aerosols cause a widening of the tropics. There is medium confidence that human activities have contributed to the observed poleward expansion, taking into account uncertainties in the magnitude of observed trends and a possible large contribution of natural climate variability.

The first part of the Key Message is supported by statements of the previous international IPCC AR5 assessment 120 and a large number of more recent studies that examined the magnitude of the observed tropical widening and various causes. 95, 161, 298, 299, 300, 301, 302, 303, 304, 305 Additional evidence for an impact of greenhouse gas increases on the widening of the tropical belt and poleward shifts of the midlatitude jets is provided by the diagnosis of CMIP5 simulations. 306, 307 There is emerging evidence for an impact of anthropogenic aerosols on the tropical expansion in the Northern Hemisphere. 308, 309 Recent studies provide new evidence on the significance of internal variability on recent changes in the tropical width. 302, 310, 311

Models are generally in agreement that tropical cyclones will be more intense and have higher precipitation rates, at least in most basins. Given the agreement among models and support of theory and mechanistic understanding, there is medium to high confidence in the overall projection, although there is some limitation on confidence levels due to the lack of a supporting detectable anthropogenic contribution to tropical cyclone intensities or precipitation rates.

The second part of the Key Message is also based on extensive evidence documented in the climate science literature and is similar to statements made in previous national (NCA3) 1 and international 249 assessments. Since those assessments, more recent downscaling studies have provided further support (e.g., Knutson et al. 2015 170 ), while pointing out that the changes (future increases in intensity and tropical cyclone precipitation rates) may not occur in all basins.

Increases in atmospheric river frequency and intensity are expected along the U.S. West Coast, leading to the likelihood of more frequent flooding conditions, with uncertainties remaining in the details of the spatial structure of these systems along the coast (for example, northern vs. southern California). Evidence for the expectation of an increase in the frequency and severity of landfalling atmospheric rivers on the U.S. West Coast comes from the CMIP-based climate change projection studies of Dettinger (2011), 163 Warner et al. (2015), 164 Payne and Magnusdottir (2015), 312 Gao et al. (2015), 165 Radić et al. (2015), 313 and Hagos et al. (2016). 314 The close connection between atmospheric rivers and water availability and flooding is based on the present-day observation studies of Guan et al. (2010), 315 Dettinger (2011), 163 Ralph et al. (2006), 316 Neiman et al. (2011), 317 Moore et al. (2012), 318 and Dettinger (2013). 319

Major uncertainties

The rate of observed expansion of the tropics depends on which metric is used. 161 The linkages between different metrics are not fully explored. Uncertainties also result from the utilization of reanalysis to determine trends and from limited observational records of free atmosphere circulation, precipitation, and evaporation. The dynamical mechanisms behind changes in the width of the tropical belt (e.g., tropical–extratropical interactions, baroclinic eddies) are not fully understood. There is also a limited understanding of how various climate forcings, such as anthropogenic aerosols, affect the width of the tropics. The coarse horizontal and vertical resolution of global climate models may limit the ability of these models to properly resolve latitudinal changes in the atmospheric circulation. Limited observational records affect the ability to accurately estimate the contribution of natural decadal to multi-decadal variability to the observed expansion of the tropics.

A key uncertainty in tropical cyclones (TCs) is the lack of a supporting detectable anthropogenic signal in the historical data to add further confidence to these projections. As such, confidence in the projections is based on agreement among different modeling studies and physical understanding (for example, potential intensity theory for TC intensities and the expectation of stronger moisture convergence, and thus higher precipitation rates, in TCs in a warmer environment containing greater amounts of environmental atmospheric moisture). Additional uncertainty stems from uncertainty in both the projected pattern and magnitude of future SST. 170

In terms of atmospheric rivers (ARs), a modest uncertainty remains in the lack of a supporting detectable anthropogenic signal in the historical data to add further confidence to these projections. However, the projected overall increase in ARs rests to a very large degree on the very high confidence that atmospheric water vapor will increase. Thus, increasing water vapor coupled with little projected change in wind structure/intensity still indicates increases in the frequency/intensity of ARs. A modest uncertainty arises in quantifying the expected change at a regional level (for example, northern Oregon versus southern Oregon), given that some changes expected in the position of the jet stream might influence the degree of increase for different locations along the West Coast. Uncertainty in the projections of the number and intensity of ARs is also introduced by uncertainties in the models’ ability to represent ARs and their interactions with climate.
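The very high confidence in increasing atmospheric water vapor rests on basic thermodynamics: saturation vapor pressure rises roughly 6%–7% per degree Celsius of warming (the Clausius–Clapeyron relation). A minimal sketch of that arithmetic is below; the Magnus approximation and the 15 °C reference temperature are illustrative assumptions, not values taken from this report.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Approximate saturation vapor pressure (hPa) using the Magnus formula."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def moisture_scaling(t0_celsius: float, warming: float) -> float:
    """Fractional increase in atmospheric water-holding capacity
    for a given warming above a reference temperature."""
    return (saturation_vapor_pressure_hpa(t0_celsius + warming)
            / saturation_vapor_pressure_hpa(t0_celsius)) - 1.0

# Near 15 degrees C, one degree of warming raises the saturation
# vapor pressure by roughly 6-7 percent, the thermodynamic basis for
# expecting more intense atmospheric rivers even with unchanged winds.
print(f"{moisture_scaling(15.0, 1.0):.1%}")
```

Because AR intensity scales with the moisture the flow can carry, this single number is why water-vapor increases dominate the AR projection even when circulation changes are uncertain.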

Description of confidence and likelihood

Medium to high confidence that the tropics and related features of the global circulation have expanded poleward is based upon the results of a large number of observational studies, using a wide variety of metrics and datasets, that reach similar conclusions. A large number of studies utilizing models of varying complexity, together with theoretical considerations, provide corroborating evidence that human activities like increases in greenhouse gases, ozone depletion, and anthropogenic aerosols contributed to the observed poleward expansion of the tropics. Climate models forced with these anthropogenic drivers cannot explain the observed magnitude of tropical expansion, and some studies suggest a possibly large contribution of internal variability. These multiple lines of evidence lead to the conclusion of medium confidence that human activities contributed to the observed expansion of the tropics.

Confidence is rated as high in tropical cyclone rainfall projections and medium in intensity projections because a number of publications support these overall conclusions, the theory is fairly well established, and studies using varying methods show general consistency and a fairly strong consensus. However, a limiting factor for confidence in the results is the lack of a supporting detectable anthropogenic contribution in observed tropical cyclone data.

There is low to medium confidence for increased occurrence of the most intense tropical cyclones for most basins, as there are relatively few formal studies focused on these changes, and the change in occurrence of such storms would be enhanced by increased intensities but reduced by decreased overall frequency of tropical cyclones.

Confidence in this finding on atmospheric rivers is rated as medium based on qualitatively similar projections among different studies.

Key Message 9: Increases in Coastal Flooding

Regional changes in sea level rise and coastal flooding are not evenly distributed across the United States: ocean circulation changes, sinking land, and Antarctic ice melt will result in greater-than-average sea level rise for the Northeast and western Gulf of Mexico under lower scenarios and for most of the U.S. coastline other than Alaska under higher scenarios (very high confidence). Since the 1960s, sea level rise has already increased the frequency of high tide flooding by a factor of 5 to 10 for several U.S. coastal communities. The frequency, depth, and extent of tidal flooding are expected to continue to increase in the future (high confidence), as is the more severe flooding associated with coastal storms, such as hurricanes and nor’easters (low confidence).

Description of evidence base

The part of the Key Message regarding the existence of geographic variability is based upon a broader observational, modeling, and theoretical literature. The specific differences are based upon the scenarios described by the Federal Interagency Sea Level Rise Task Force. 76 The processes that cause geographic variability in regional sea level (RSL) change are also reviewed by Kopp et al. (2015). 320 Long tide gauge datasets reveal that, along many U.S. coastlines, RSL rise is largely driven by vertical land motion due to glacio-isostatic adjustment and fluid withdrawal. 321, 322 These observations are corroborated by glacio-isostatic adjustment models, by global positioning satellite (GPS) observations, and by geological data (e.g., Engelhart and Horton 2012 323 ). The physics of the gravitational, rotational, and flexural “static-equilibrium fingerprint” response of sea level to redistribution of mass from land ice to the oceans is well-established. 324, 325 GCM studies indicate the potential for a Gulf Stream contribution to sea level rise in the U.S. Northeast. 326, 327 Kopp et al. (2014) 77 and Slangen et al. (2014) 59 accounted for land motion (only glacial isostatic adjustment for Slangen et al.), fingerprint, and ocean dynamic responses. Comparing projections of local RSL change and GMSL change in these studies indicates that local rise is likely to be greater than the global average along the U.S. Atlantic and Gulf Coasts and less than the global average in most of the Pacific Northwest. Sea level rise projections in this report were developed by a Federal Interagency Sea Level Rise Task Force. 76

The frequency, extent, and depth of extreme event-driven (e.g., 5- to 100-year event probabilities) coastal flooding relative to existing infrastructure will continue to increase in the future as local RSL rises. 57, 76, 77, 328, 329, 330, 331, 332, 333 These projections are based on modeling studies of future hurricane characteristics and the associated amplification of major storm surge risk. Extreme flood probabilities will increase regardless of changes in storm characteristics, though such changes may exacerbate the increase. Model-based projections of tropical storms and related major storm surges within the North Atlantic mostly agree that intensities and frequencies of the most intense storms will increase this century. 190, 334, 335, 336, 337 However, the projection of increased hurricane intensity is more robust across models than the projection of increased frequency of the most intense storms. A number of models project a decrease in the overall number of tropical storms and hurricanes in the North Atlantic, although high-resolution models generally project increased mean hurricane intensity (e.g., Knutson et al. 2013 190 ). In addition, there is model evidence for a change in tropical cyclone tracks in warm years that minimizes the increase in landfalling hurricanes in the U.S. mid-Atlantic or Northeast. 338
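The statement that flood probabilities rise with local RSL even if storms do not change can be sketched with simple extreme-value arithmetic: if extreme water levels have a roughly exponential (Gumbel-like) upper tail with scale λ, a rise of Δ in mean sea level multiplies the frequency of exceeding any fixed threshold by exp(Δ/λ). The sketch below is illustrative only; the 0.15 m rise and 0.08 m tail scale are assumed values, not figures from this report.

```python
import math

def flood_frequency_multiplier(slr_m: float, tail_scale_m: float) -> float:
    """Factor by which exceedances of a fixed flood threshold become more
    frequent after a mean sea level rise of `slr_m`, assuming extreme water
    levels follow an exponential (Gumbel-like) upper tail with scale
    parameter `tail_scale_m`."""
    return math.exp(slr_m / tail_scale_m)

# Illustrative (assumed) numbers: a 0.15 m local rise against a 0.08 m tail
# scale multiplies flood frequency several-fold, the same order of magnitude
# as the 5- to 10-fold increases in high tide flooding cited in the report.
print(round(flood_frequency_multiplier(0.15, 0.08), 1))
```

The exponential dependence is the design insight: modest, steady sea level rise produces large multiplicative jumps in how often a fixed infrastructure threshold is exceeded, independent of any change in storminess.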

Major uncertainties

Since NCA3, 1 multiple authors have produced global or regional studies synthesizing the major processes that cause global and local sea level change to diverge. The largest sources of uncertainty in the geographic variability of sea level change are ocean dynamic sea level change and, for those regions where sea level fingerprints for Greenland and Antarctica differ from the global mean in different directions, the relative contributions of these two sources to projected sea level change.

Uncertainties remain large with respect to the precise change in future risk of a major coastal impact at a specific location from changes in the most intense tropical cyclone characteristics and tracks beyond changes imposed from local sea level rise.

Description of confidence and likelihood

Because of the enumerated physical processes, there is very high confidence that RSL change will vary across U.S. coastlines. There is high confidence in the likely differences of RSL change from GMSL change under different levels of GMSL change, based on projections incorporating the different relevant processes. There is low confidence that the flood risk at specific locations will be amplified from a major tropical storm this century.

Key Message 10: Long-Term Changes

The climate change resulting from human-caused emissions of carbon dioxide will persist for decades to millennia. Self-reinforcing cycles within the climate system have the potential to accelerate human-induced change and even shift Earth’s climate system into new states that are very different from those experienced in the recent past. Future changes outside the range projected by climate models cannot be ruled out (very high confidence), and due to their systematic tendency to underestimate temperature change during past warm periods, models may be more likely to underestimate than to overestimate long-term future change (medium confidence).

Description of evidence base

This Key Message is based on a large body of scientific literature recently summarized by Lenton et al. (2008), 197 NRC (2013), 339 and Kopp et al. (2016). 198 As NRC (2013) 339 states, “A study of Earth’s climate history suggests the inevitability of ‘tipping points’—thresholds beyond which major and rapid changes occur when crossed—that lead to abrupt changes in the climate system” and “Can all tipping points be foreseen? Probably not. Some will have no precursors, or may be triggered by naturally occurring variability in the climate system. Some will be difficult to detect, clearly visible only after they have been crossed and an abrupt change becomes inevitable.” As IPCC AR5 WG1 Chapter 12, Section 12.5.5 26 further states, “A number of components or phenomena within the Earth system have been proposed as potentially possessing critical thresholds (sometimes referred to as tipping points) beyond which abrupt or nonlinear transitions to a different state ensues.” Collins et al. (2013) 26 further summarize critical thresholds that can be modeled and others that can only be identified.

This Key Message is also based on the conclusions of IPCC AR5 WG1, 249 specifically Chapter 7. 196 The state of the art of global models is briefly summarized in Hayhoe et al. (2017). 24 This Key Message is further based upon the tendency of global climate models to underestimate, relative to geological reconstructions, the magnitude of both long-term global mean warming and the amplification of warming at high latitudes in past warm climates (e.g., Salzmann et al. 2013, Goldner et al. 2014, Caballero and Huber 2013, Lunt et al. 2012 199, 201, 340, 341 ).

Major uncertainties

The largest uncertainties are 1) whether proposed tipping elements actually undergo critical transitions, 2) the magnitude and timing of forcing that will be required to initiate critical transitions in tipping elements, 3) the speed of the transition once it has been triggered, 4) the characteristics of the new state that results from such transition, and 5) the potential for new positive feedbacks and tipping elements to exist that are yet unknown.

The largest uncertainties in models are structural: are the models including all the important components and relationships necessary to model the feedbacks and, if so, are these correctly represented in the models?

Description of confidence and likelihood

There is very high confidence in the likelihood of the existence of positive feedbacks and tipping elements based on a large body of literature published over the last 25 years that draws from basic physics, observations, paleoclimate data, and modeling.

There is very high confidence that some feedbacks can be quantified, others are known but cannot be quantified, and others may yet exist that are currently unknown.

There is very high confidence that the models are incomplete representations of the real world and there is medium confidence that their tendency is to under- rather than overestimate the amount of long-term future change.