Ryan Claasen and Quin Monson of BYU have a recent article in the Journal of Political Science Education that tests the impact of a series of civic education exercises in two large introductory American politics classes.
What makes the paper particularly nice are these features that provide much greater leverage on the question of impact:
- An identical manipulation was implemented at two large research universities with different local political cultures: one conservative and religious (BYU), the other more liberal and secular (Kent State)
- They used experimental methods, assigning students to a “writing lab” that was, for a randomly selected portion, political blogging
- They compared political behavior among the groups 6, 12, and 18 months after the classes
The impact of the classroom manipulation was modest, with only minor movements in reported voter turnout. More encouraging, however, was the impact of the blogging exercise on student engagement with politics writ large: the bloggers reported higher levels of news consumption and political information well after the class ended.
Despite consensus regarding the civic shortcomings of American citizens, no such scholarly consensus exists regarding the effectiveness of civic education addressing political apathy and ignorance. Accordingly, we report the results of a detailed study of students enrolled in introductory American politics courses on the campuses of two large research universities. The study provides pre- and postmeasures for a broad range of political attitudes and behaviors and includes additional long-term observations in survey waves fielded 6, 12, and 18 months after the conclusion of the class. Long-term observation provides leverage absent in many prior studies and enables us to compare the changes we observe during the semester to those that take place beyond the confines of the classroom and during important political events, such as the 2012 presidential election. Also embedded in the study is an experiment designed to assess whether students’ enthusiasm for “new media” (e.g., blogs) can be harnessed in American politics courses to stimulate long-lasting political engagement. We find evidence that civic education matters for some, but not all, measures of political engagement. Moreover, we find evidence that what one does in the classroom also matters. For some dimensions of political engagement, this study finds evidence of lasting civic education effects and the experimental manipulation compellingly locates the source of some engagement variation in the classroom.
A new article by Damien Bol in Party Politics examines when political parties support changes to the electoral formula in their country. Bol implicitly compares a model where parties support a reform strictly because they think it will increase their share of seats in Parliament vs. reforms that benefit (or harm) social groups assumed to support the party's platform. (I think it's a bit misleading to call this latter source of support "values," as Bol does in the title, but he later switches to "policy.")
Regular readers of this blog may be confused by the title of the piece: "reform" here refers only to changes in the proportionality formula. But the paper is an interesting treatment nonetheless.
It is often taken for granted that parties support electoral reform because they anticipate seat payoffs from the psychological and mechanical effects of the new electoral system. Although some studies point out that elements related to values and the willingness to achieve social goals are also relevant to explaining party preference in those situations, a general model of how these considerations influence support for electoral reform is still missing. To fill this gap, I develop in this article a policy-seeking model accounting for values-related factors and operationalize it using one of the most firmly established effects of electoral systems in the literature: The degree of inclusiveness and its consequences for the representation of social groups in parliament. The empirical relevance of this model is then tested using an original dataset reporting the actual position of 115 parties facing 22 electoral reform proposals in OECD countries since 1961. The results show that willingness to favour the electoral system most in line with a party’s electoral platform has a unique explanatory power over party support for a more proportional electoral system. In turn, values appear to be as crucial as party self-interest in explaining the overall electoral reform story.
Senate bill 8582, introduced 11/18/2015 in the New York legislature, would provide for early in-person voting in the Empire State.
I’ve talked to NY state legislators before, but not about this legislation. It does contain some useful provisions that I often recommend, including:
- A population-based floor (but no ceiling) on the number of early voting locations
- Allows for early voting “vote centers” in the City of New York (not county based)
- An early voting period that includes two weekends and requires some Saturday and Sunday voting, and requires at least one early voting location in each county to stay open until 8 in the evening
Early voting locations are also subject to other siting provisions, ensuring that not just their number but also their accessibility will be taken into account:
POLLING PLACES FOR EARLY VOTING SHALL BE LOCATED TO ENSURE, TO THE EXTENT PRACTICABLE, THAT ELIGIBLE VOTERS HAVE ADEQUATE EQUITABLE ACCESS, TAKING INTO CONSIDERATION POPULATION DENSITY, TRAVEL TIME TO THE POLLING PLACE, PROXIMITY TO OTHER LOCATIONS OR COMMONLY USED TRANSPORTATION ROUTES AND SUCH OTHER FACTORS THE BOARD OF ELECTIONS OF THE COUNTY OR THE CITY OF NEW YORK DEEMS APPROPRIATE.
Another excellent report by Michael Waldman of the Brennan Center. Even if you don’t agree with their position on some of these legal changes, they maintain some of the best resources for election laws and procedures.
An excellent new podcast as part of Rick Hasen’s Election Law Blog (ELB) series features Prof. Nathan Persily addressing the question “can the Supreme Court handle social science?” Persily addresses the question in light of recent litigation over campaign finance and voter identification.
Persily is well-known in the election reform community; for the broader political science community, Persily received his PhD in Political Science from Berkeley, his JD from Stanford, and served as research director for the Presidential Commission on Election Administration. Many may be familiar with him from his recent edited volume, Solutions to Political Polarization in America (Cambridge).
Any political scientist who is interested in how the Court and the legal community view our scholarship, and more generally in how social science can be made more comprehensible and impactful in the policy community, would do well to listen to this 30-minute podcast.
A quick link to Peter Miller's and my paper on public opinion and torture. It's not pertinent to early voting, but it gives us a place to link to our presentation.
The Election Assistance Commission’s Election Administration and Voting Survey has been released. This is the first in a series of posts that will highlight some patterns and anomalies in the data.
The EAVS is one of the best ways to assess whether or not a state is adhering to the requirements of the National Voter Registration Act of 1993, which obligates states, among other things, to provide the option to register to vote via motor vehicle agencies and other social service agencies.
To assess compliance, however, the data need to be reported. I have shown below a table that reports the state by state totals from three variables in the EAVS that should in principle have the same value:
- QA5a – “The total number of registration forms received by your jurisdiction”
- QA6_Total – “Registration forms received, broken down by source.” The reported figure should be the sum of the individual sources, but it is also labeled explicitly on the questionnaire as “QA5a,” alerting the jurisdiction that the total here should match the total listed above.
- Regtotal – My own calculated total of registration forms from all sources.
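The consistency check behind the table is straightforward and can be sketched in a few lines of Python. This is an illustrative sketch, not the actual code used: the column names follow the EAVS questionnaire (QA5a, QA6_Total), but the source categories and the sample figures below are hypothetical stand-ins for the real state-level data.

```python
# Hypothetical source categories; the real EAVS breaks QA6 into more items.
SOURCE_COLS = ["dmv", "mail", "agency", "online", "other"]

def grade_state(row):
    """Grade a state's NVRA reporting by whether its three totals agree."""
    # Regtotal: my own sum of registration forms across all reported sources.
    regtotal = sum(row.get(col, 0) or 0 for col in SOURCE_COLS)
    qa5a = row.get("QA5a")          # total forms received, as reported
    qa6_total = row.get("QA6_Total")  # reported sum of the source breakdown
    if qa5a is None and qa6_total is None:
        return "F"    # nothing reported for the NVRA section at all
    if qa5a == qa6_total == regtotal:
        return "A+"   # all three figures match, as they should
    if qa6_total is None and qa5a == regtotal:
        return "A"    # the total was left blank, but the numbers match
    return "?"        # discrepancy; needs a closer look before grading

# Illustrative rows, not real 2014 EAVS values:
states = {
    "AL": {"QA5a": 500, "QA6_Total": 500, "dmv": 300, "mail": 200},
    "WI": {"QA5a": 400, "QA6_Total": None, "dmv": 250, "mail": 150},
    "ID": {"QA5a": None, "QA6_Total": None},
}
for state, row in states.items():
    print(state, grade_state(row))
```

The check deliberately treats a blank QA6_Total differently from a mismatch, which is what separates Wisconsin's "A" from a flat "F" in the grading below.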
The data are reported by state below. As you can see, there are only eleven states where all three figures match as they are supposed to: AL, CO, CT, LA, ME, MI, MN, NC, NH, OR, and WY. ND is not required to report this information. These states get an “A+” for reporting.
Wisconsin simply forgot to enter the “total” for QA6_Total, but the numbers match. We’ll give them an “A”.
Idaho, New Jersey, and South Dakota reported nothing for the NVRA section at all. Not sure they can get a grade other than “F”.
I haven’t probed the other differences in order to give more nuanced grades. I’ll leave that to other experts.