Assessing an Election's Quality with a PVT/Quick Count
In many election observations the final vote count attracts the most attention. This is entirely understandable. The vote count determines election day winners and losers, and the integrity of that count is a longstanding concern in many countries. The final count, however, is just one aspect of an election. No one doubts that an accurate, honest vote count is a necessary condition for a democratic election, but it is not a sufficient condition. Electoral outcomes too often have been rigged in ways that have little or nothing to do with the counting and tabulation of results. The will of the electorate has been nullified, for example, by: blocking legitimate candidates and parties from appearing on the ballot; otherwise tilting electoral laws and regulations; financing campaigns illicitly, including through the improper use of state resources; preventing open and free campaigns; intimidating and bribing voters; using biased voter registration lists; interfering with the secrecy of the vote; manipulating the administration of the election and complaint mechanisms; and preventing legitimate winners from assuming office.
For these reasons, election observers must concentrate on the quality of the electoral process before, during and after election day. To be effective and credible, contemporary election observations should not depend on impressionistic evidence or anecdotes. Anecdotal or impressionistic evidence is unreliable, and it leaves too many important questions unanswered. Qualitative problems in the process should be quantified as much as possible so that their impact can be characterized appropriately. For example, if unused ballots have been tampered with, then there is surely cause for concern. But the more important questions include: How widespread was the problem? Did the tampering work in favor of one party to the detriment of others? Was the tampering part of a larger scheme aimed at interfering with the outcome of the election? The only sure way to answer these important questions is to collect reliable and systematic information from well-trained observers.
This chapter is divided into two parts. The first provides basic guidelines for designing the qualitative component of the election-day observation. To collect qualitative data, observers use standardized forms, and the place to begin is with the design of these forms. What should observers try to measure? What questions should be included? What principles should be followed to make sure that the questions included on forms will produce reliable and useful evidence? What are the most common mistakes, and how can they be avoided? These issues are illustrated with a discussion of observer forms that have been used in the field. The second part of the chapter discusses a variety of strategies that can be used to analyze the qualitative results.
Two preliminary points need to be emphasized regarding the qualitative component of an election observation. The first is that the general methodology driving the qualitative evaluation of elections through observer reports is exactly the same as the methodology that underpins the generation of the vote count data for the quick count. The qualitative reports come from the same observers and from the same polling stations used for the retrieval of vote count data. Recall that these polling stations are sample points that are determined by random selection. This means that the qualitative data gathered from observers have the same statistical properties as the vote count data; the findings of the qualitative analysis of sample data can be reliably generalized to the quality of the entire election-day process throughout the country. The same margins of error also apply. Because of these characteristics, the qualitative data provide a systematic way of evaluating election day processes on a national basis.1
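Because the qualitative reports come from the same randomly selected sample points as the vote counts, their headline figures carry margins of error that can be computed in the same way. A minimal Python sketch, assuming simple random sampling (real quick count designs are often stratified, which changes the formula) and using invented counts:

```python
import math

def proportion_with_moe(count: int, sample_size: int, z: float = 1.96):
    """Estimate a proportion from a random sample of polling stations,
    with its margin of error at 95% confidence (z = 1.96)."""
    p = count / sample_size
    moe = z * math.sqrt(p * (1 - p) / sample_size)
    return p, moe

# Hypothetical example: 48 of 1,200 sampled stations report a problem.
p, moe = proportion_with_moe(48, 1200)
print(f"{p:.1%} +/- {moe:.1%}")  # 4.0% +/- 1.1%
```

The same calculation applies whether the proportion in question is a candidate's vote share or the share of polling stations reporting a particular problem.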
The second preliminary point to emphasize is that there is no such thing as an election that is completely free of error. Nor does the fact that errors have been made necessarily mean that fraud has taken place. Nationwide elections are complicated events to plan and administer. Election-day mistakes are made everywhere. In the vast majority of cases, these mistakes are simply a matter of human error. A polling official may get sick and fail to report to the polling station on election morning. As a result, a polling station may end up being short of the proper number of officials. Materials might have been misplaced or inadvertently sent to the wrong polling station. A polling station might not open on time because someone forgot to tell a supervisor that a building has to be unlocked early on Sunday morning so that officials can set up. Because national elections are difficult to organize, you can expect that some things might go wrong on election day.
The important point is that most of these kinds of errors qualify as unintentional human error. In societies where corrupt practices have plagued elections for decades, people understandably tend to view any irregularities on election day with a great deal of suspicion. It is a mistake, however, to leap to the conclusion that each and every election day problem necessarily indicates that there has been a fraudulent attempt to fix an election. Such human errors are usually random; they do not conform to any particular pattern. Moreover, random error usually means that the “mistakes” do not end up favoring any one political party or any one candidate running for office.
Because the qualitative observation data rely on exactly the same statistical principles as those used to generate the quick count vote data, analysts have the tools to determine whether “errors” found in the qualitative data at the national level are random or systematic. There are strong reasons to worry about evidence of systematic patterns of “errors.” Random problems should certainly be reported, but the more important task for analysts is to determine the consequences of non-random problems. It is possible, for example, that analysis will show that a disproportionate number of problems that disenfranchise voters occurred in areas that are traditional opposition strongholds, and/or that problems indicating multiple voting occurred in ruling party strongholds, at an incidence that could affect the outcome of the election. On the other hand, analysis could demonstrate that the problems do not follow a politically discriminatory pattern or that their incidence is minimal.
DESIGNING OBSERVATION FORMS
The goal of the qualitative part of the quick count observation is to provide a systematic and reliable evaluation of important aspects of the electoral process. But any effective evaluation needs benchmarks against which behavior can be evaluated. Administrative rules for elections usually set out in detail exactly how things are supposed to work at each polling station on election day, and these rules usually include clear guidelines covering the selection and duties of polling station personnel. These rules and administrative guidelines establish the acceptable procedures for the administration of the polling station. Typically, they specify what materials are required at polling stations, they provide instructions for polling station personnel and they set out procedures for dealing with anomalies. Electoral authorities issue these procedures based on the law; they should also seek public input and broad political agreement. Domestic observation organizations might find that the official rules are incomplete, arbitrary or in some way fall short of desirable standards. If so, observers should point out these problems in a report. However, when it comes to the design of the qualitative observation forms, the place to start is with the rules established by electoral authorities. These rules are public, and they define the officially acceptable, and unacceptable, standards for the election-day operations of polling stations.
How Long Should the Forms Be?
When election observer groups first try to decide precisely what qualitative issues they want to evaluate, they often produce a vast list of questions about election day procedures for observers to answer. Undoubtedly, a vast number of “important” questions could be asked about the quality of any electoral process, but for practical reasons it is not possible both to ask every possible question and to produce timely, useful data. The problem is one of resource constraints; tough choices have to be made.
The most important constraint on election day is time. The more data observers are asked to collect, the more time it takes to collect the data, transmit the information, enter the data into computer files and analyze it. For an observation to maximize its impact, observer groups have to be able to gather key pieces of information quickly, analyze the data quickly, and interpret and release the findings quickly. Citizens want to know whether the election is “going well” or “going badly” on election day. They usually want to know whether the polls opened “on time,” for instance, before the polls have closed. Because time is vital, the qualitative reporting forms have to be short. That said, the next challenge is to decide which qualitative questions are the most important of all. Once decisions have been made about what needs to be evaluated and measured, the next matter is to decide the best way to construct the measure.
There is no single list of qualitative questions that works equally well for every election in all countries, and it is useful to invest some time thinking about what particular issues might be uniquely relevant to a particular election. For example, if there has been recent experience with military intervention in election day procedures, and opposition parties and others express concern that these experiences might be repeated, then there are good reasons to consider including questions about the role of the military, or the police, on the qualitative observation forms. If there are reasons to believe that proper voter identification cards have not been universally distributed, or that the election day registration of voters will be problematic, then questions about these issues should be included in the qualitative observation forms.
How many questions should qualitative observation forms contain? There is no hard and fast rule, but most experienced election observation groups usually end up using qualitative observation forms that contain somewhere between 12 and 15 questions. Experience shows that election day qualitative reports rarely use data from more than 8 of those 12-15 questions. At issue is a practical matter: it is simply not possible to collect, transmit, digitally enter and analyze more than 15 qualitative observation questions in time to report on election-day processes. If data cannot be used, then why collect them?
The Do’s and Don’ts of Question Design
Designing the content of the observation forms (the questions) is an important task that requires patient and careful attention to detail. Past practice suggests that the best way to design the questions is to recruit a small team of people who can work together. That team needs to be able to identify the 12-15 most important qualitative questions for observers to ask, and its members need to be aware of the key factors that guide informed decisions about the best way to ask these questions. For that reason, members of the team have to have some expertise.
Typically, the volunteer coordinator takes the lead in designing forms. She or he works with several additional individuals, including:
- The executive director or a board member—Knowledge and judgement about the political environment are needed to be sure that questions address the likely key problems in election-day procedures, such as disenfranchisement or illegal voting based on voter lists, ballot box stuffing, crediting votes to the wrong candidate, etc. Therefore, the executive director, a board member or other such person must help to design the forms.
- An electoral law expert—Because questions aim to evaluate the quality of election day processes, the team needs to include someone who is knowledgeable about how election day processes are supposed to work. This means including someone on the team who knows the details of the electoral law and regulations.
- The lead trainer—Observers must be “trained to the forms.” That is, trainers have to explain to observers the details about exactly how the forms are supposed to be used. This team member has to be able to think about the structure and content of the form from the point of view of the observer and to anticipate how the structure and content of the forms shape the training of observers.
- A data analyst—Someone responsible for analyzing data on election day must be on the team to consider methodological issues of question construction, the practical challenges of data transmission and data entry, as well as the interpretive challenges of how the data will be configured and used on election day.
With the team in place, the next task is to work together to make the detailed decisions about precisely how each question will be formulated. Cumulative experience with qualitative form construction and measurement suggests some useful rules to follow. In effect, each and every proposed question should be able to pass a series of “tests.” These can be summarized as follows:
- The usefulness test—For each proposed question, the analyst should be able to specify first, why it is critical to have that particular piece of information quickly, and second, precisely how the data from that question will be used in the analysis. If there is no compelling reason for having the information quickly, or if it is not clear exactly how the data from the question will be used, then the question should not be asked.
- The validity test—Recall that validity refers to how well an indicator (here, the data produced by answers to questions on the form) actually measures the underlying concept of interest. The questions that need clear answers are: Exactly what concept is being measured by the question? And is there a better, more direct or clearer way to formulate the question to measure that concept?
- The reliability test—Reliability has to do with the consistency of the measurement. The goal is to reduce the variation in responses between observers, that is, to have independent observers watching the same event record that event in exactly the same way. When questions are worded ambiguously, observers are more likely to record different results when independently measuring the same event. Note that problems of validity and reliability are the most serious sources of non-sampling error plaguing systematic observation data.
- The response categories test—Response categories for questions have to satisfy two minimal conditions. First, the response categories should be exhaustive. This means that the structure of the response categories should collectively cover all of the possible meaningful ranges of responses. Second, response categories have to be mutually exclusive. That is, the range of values in one response category should not overlap with those of other categories. (A short sketch after this list shows how both conditions can be checked mechanically.)
- The efficiency test—Response categories should be designed to achieve the maximum efficiency by keeping the number of response categories to a minimum. This has a significant impact on the volume of data that are being transmitted. The fewer the number of response categories used in a form, the faster and more accurately the data can be transmitted. Furthermore, fewer key strokes are required to enter the data into the computerized dataset.
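The exhaustiveness and mutual-exclusivity conditions are mechanical enough to verify programmatically during form design. A small illustrative Python check, assuming time-range categories are encoded as half-open minute intervals (the function and the encoding are hypothetical, not part of any standard form):

```python
def check_categories(ranges, domain=(0, 24 * 60)):
    """Check that minute-interval response categories are mutually
    exclusive and exhaustive over the domain. Each category is a
    half-open (start, end) interval in minutes since midnight."""
    ordered = sorted(ranges)
    # Mutually exclusive: no interval starts before the previous one ends.
    exclusive = all(prev[1] <= nxt[0] for prev, nxt in zip(ordered, ordered[1:]))
    # Exhaustive: intervals meet exactly and cover the whole domain.
    exhaustive = (ordered[0][0] <= domain[0]
                  and ordered[-1][1] >= domain[1]
                  and all(prev[1] == nxt[0] for prev, nxt in zip(ordered, ordered[1:])))
    return exclusive, exhaustive

# "Before 7:00", "7:00-8:00", "After 8:00" as minute intervals:
print(check_categories([(0, 420), (420, 480), (480, 1440)]))  # (True, True)
print(check_categories([(0, 420), (440, 480), (480, 1440)]))  # (True, False): gap
```

Catching a gap or overlap at the design stage is far cheaper than discovering it after thousands of observers have been trained on the form.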
What to Avoid
Lessons from past experience also suggest that some practices should be avoided. These include:
- Open-ended questions—When designing observation forms it is very tempting to want to include a few open-ended questions. For example, if observers record the fact that the police might have intervened in election day activities at a particular polling station, then it is natural to want to know the details of what exactly happened. But the qualitative short forms are not the best places to record this information; details of incidents that could have a significant impact on the electoral process should be gathered on separate forms. Answers to open-ended qualitative questions might well produce “interesting findings,” but these kinds of data are cumbersome. Uncategorized answers to open-ended questions are a type of “anecdotal evidence,” and to be of any analytic help these kinds of answers have to be re-coded into useful categories. The problem is that it is very time consuming to recode such data. For all practical purposes it is too difficult to both categorize and analyze these data within very tight time constraints.
- False precision—Analysts want to work with precise results, but attempting to achieve very high levels of precision is seldom warranted. Extra precision usually involves collecting more data, which increases the load on observers and communications systems. It also requires more time to enter data that, in most cases, do not provide a substantive payoff when it comes to the basic interpretation of the evidence. Consider the following example related to the opening of polling stations:
- We want to know at what time the first voter cast a ballot at a particular polling station, so we ask the observer to record the exact time, say 8:02 am. That may be the most precise result; however, that level of precision is unnecessary. Moreover, this specification of the question introduces time consuming complications for both data entry and analysis. Suppose five polling stations opened at the following times: 6:37; 9:58; 7:42; 11:59 and 12:10. Determining the average opening time involves summing all of these times and then dividing by the number of observations, five. Simple computational systems operate in units of 1, 10, 100 and so on, but the standard clock does not; there are 60 minutes in an hour, not 10 or 100, and there are 24 hours in a day, not 10 or 100. Naively summing clock readings as if they were decimal numbers (treating 6:37 as 6.37, for instance) therefore produces a figure that makes no sense and is actually incorrect. It is possible, of course, to write an algorithm that “translates” standard clock time into standardized units and then translates those standardized units back into standard time, but that practice is awkward, time consuming and involves unnecessary extra work (see the sketch following this list). At the end of the day what we really need to know is: What proportion of all polling stations opened “on time”? What proportions were “late” or “very late”? And how many, and which, polling stations did not open at all?
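A short Python sketch makes the point concrete: exact times demand conversion machinery before they can even be averaged, whereas the categories the report actually needs can be tallied directly (the times are the five from the example above; the category cut-offs are illustrative):

```python
from collections import Counter

def to_minutes(hhmm: str) -> int:
    """Convert an 'H:MM' clock reading to minutes since midnight."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def category(hhmm: str) -> str:
    """Bin an opening time into the report's three categories."""
    m = to_minutes(hhmm)
    return "on time" if m < 7 * 60 else "late" if m < 8 * 60 else "very late"

times = ["6:37", "9:58", "7:42", "11:59", "12:10"]

# Exact times must first be converted to a consistent unit (minutes)
# before a meaningful average can be taken at all...
avg = sum(map(to_minutes, times)) / len(times)
print(f"average opening time: {int(avg // 60)}:{int(avg % 60):02d}")  # 9:41

# ...whereas the categories the report actually uses tally directly.
print(Counter(map(category, times)))  # very late: 3, on time: 1, late: 1
```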
Observation Forms: An Example
How these design principles help to produce efficient, usable questions that satisfy the usefulness, validity, reliability, response-category and efficiency tests is illustrated in the forms presented in Figure 6-1.2, 3
The content of Form 1 covers six areas. The first part, the observer code and the polling station number, consists of identification numbers. The “code” refers to the security code number assigned to each observer. Using such a code makes it far more difficult for outsiders to break into the observation system or to interfere with the observation. Data entry personnel are trained not to enter any data from callers who do not supply the correct code number. The code number and the polling station number have to match those contained in the database. After the correct codes are supplied, the reported data from Form 1 are entered into the master database.
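The gatekeeping logic is simple to express in code. A hypothetical sketch, assuming a lookup table (here called VALID) that maps each observer's security code to the polling station assigned in the master database:

```python
# Hypothetical credential table keyed by observer security code.
VALID = {
    "A417": "PS-0032",  # observer code -> assigned polling station
    "B922": "PS-0147",
}

def accept_report(code: str, station: str) -> bool:
    """Accept a called-in report only when the security code exists and
    matches the polling station recorded in the master database."""
    return VALID.get(code) == station

assert accept_report("A417", "PS-0032")      # matches: enter the data
assert not accept_report("A417", "PS-0147")  # mismatch: reject
assert not accept_report("ZZZZ", "PS-0032")  # unknown code: reject
```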
The first substantive question identifies the time of installation of the polling station. The second set of questions indicates which polling station personnel were present at the installation and whether they were the appointed officials or substitutes. The third block of questions is a checklist for reporting the presence or absence of required voting materials, and the fourth block collects data on whether proper installation procedures were followed. The fifth section identifies which party agents were present at the polling station, and the final part indicates what time voting began.
The application of the principles of question design can be most easily illustrated by working through an example:
- Suppose observers want to know whether polling stations opened on time on election day. One possibility is to simply construct a question as in version A.
- Version A: “Did the polling station you were observing open on time on election morning?” “Yes” “No”
But there are several problems with this wording of the question. First, observers will almost certainly have in their minds different ideas about just when a polling station is in fact “open.” Is a polling station “open” when the election officials are all present? Is it “open” when all of the election officials and party agents are present and after all of the materials have been set out? Or is a polling station “open” at the moment that the first voter casts a ballot? Moreover, we need to be very clear about what “on time” means. If a polling station is supposed to be “open” at 6:00 am and the first voter casts a ballot at 6:25, has the polling station actually “opened on time”?
Variations in how these concepts are understood pose problems of validity and reliability. If observers hold different views about what “on time” means, and the decision is left up to them, they will produce unreliable measures. Version B is both a more valid and a more reliable way to ask the very same question.
- Version B: “When did the first voter cast a ballot at the polling station?” “Before 7:00” “Between 7:00 and 8:00” “After 8:00” “Did not open”
This particular version of the question has several advantages:
- First, this question wording reduces any ambiguity about the question of when a polling station actually “opens,” and it provides a clear guideline to observers for what qualifies as “on time.” There is no conceptual ambiguity, and so there is validity.
- Second, because the response categories are varied across time, analysts can examine the distribution of “opening times,” which will reveal the scale and scope of administrative problems in getting polling stations “open.” These categories allow responses to vary in meaningful ways; the “usefulness test” is satisfied. Also, the measurement categories are clear; there is no room for observers to provide their own interpretation of what is “late” or “early.” Consequently, the measurement will be reliable. Note too that the response categories in version B of the question satisfy both of the measurement rules: the categories are exhaustive and mutually exclusive.
- Third, this version of the question also supplies us with an important additional piece of information; it tells us which polling stations did not open at all.
There is a caveat to the above example: concern about the late opening of polling stations is not simply a gauge of administrative organization; it is also an indicator of whether prospective voters had a genuine opportunity to vote. Late openings alone do not measure whether anyone was disenfranchised as a consequence of the problem. That might be better measured by an observer outside the polling station recording how many people left the line because of long waits, and even that indicator does not address whether those persons returned later. These are the types of issues to discuss when designing an observation and its forms.
ANALYZING QUALITATIVE DATA
Analyzing data within very short time constraints is no easy task. Data analysts usually have to begin to prepare for the job well in advance of election day by:
- gathering contextual information;
- developing a clear election-day plan;
- creating a software “shell” for the presentation of graphics; and
- establishing a working protocol for management of results produced by the analysis team.
Pre-Election Preparation
During the run-up to elections, analysts gather different kinds of contextual information that will help them to interpret the qualitative data.
Contextual Data
Typically, the most useful contextual data to gather are those from previous elections (when available), especially from the election immediately preceding the present observation. For example, consider the case of voter turnout. Voter turnout indicates levels of citizen participation on election day, and citizen participation is an important measure of the health of an election process. But how do you know if voter turnout is “high” or “unusually low”? At least two kinds of benchmarks are helpful for making these kinds of evaluations. The most obvious benchmark comes from documentation of the recent electoral history of the country. Was voter turnout in the present election “unusually low” when compared with levels of voter turnout in the previous election, or with other national elections in the recent past? International benchmark comparisons might also be helpful, but these comparisons have to be made cautiously because electoral rules have significant effects on levels of voter turnout. Voter turnout is systematically higher, for example, in countries using proportional representation than in majoritarian electoral systems, so any international comparison has to take such factors into account. Prior elections can also provide useful benchmark data for interpreting whether the number of challenged ballots or other anomalies was “unusually high.” Most election commissions keep records of prior elections, and those records should be publicly available.
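A small illustrative sketch of that benchmark logic, with hypothetical turnout figures and a hypothetical five-point tolerance; in practice the tolerance should reflect the quick count's own margin of error and the country's electoral history:

```python
# Hypothetical turnout benchmarks from the two previous national elections.
BENCHMARKS = {"previous": 0.71, "two_ago": 0.68}

def flag_turnout(current: float, benchmarks: dict, tolerance: float = 0.05):
    """Flag the current turnout estimate if it deviates from every
    historical benchmark by more than `tolerance` (5 points here)."""
    gaps = {name: current - level for name, level in benchmarks.items()}
    unusual = all(abs(gap) > tolerance for gap in gaps.values())
    return gaps, unusual

gaps, unusual = flag_turnout(0.52, BENCHMARKS)
print(gaps)  # roughly {'previous': -0.19, 'two_ago': -0.16}
print("unusual" if unusual else "within historical range")
```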
Pre-election preparation also involves gathering data from international organizations that conduct election observations. These organizations may have participated in observer missions, or they may have assisted domestic nongovernmental organizations conducting observations in the country. Some of these organizations keep records of previous involvement, and their archived files on other elections can provide important detailed contextual election data.
A Clear Plan
It is essential that analysts develop in advance a clear plan addressing: Exactly how will they work with the observer data when they start to arrive on election day? Which parts of the dataset will be examined first? In what order will the data be analyzed? Do the analysts know exactly how to proceed if findings indicate that there may have been some problems? Which are the problems that seem most likely to arise on election day? How will they be analyzed? These questions must not be left until election day, and they should be discussed in advance with those responsible for presenting the results to the public. The point is to eliminate as many “surprises” as possible.
Using Graphics
Next, analysts must plan how they will use graphics. Graphic presentations of data make observation results more accessible to the media and to the public. In many cases newspapers will simply print the graphic results produced by observer groups. The production of user-friendly graphics solves two problems: it saves newspapers the trouble of producing their own graphics, and it reduces the chances that errors will be made in the presentation of findings.
The production of graphics is time consuming, and it is remarkable just how much disagreement can arise over the best way to present information. Just as the leadership of the organization should prepare in advance drafts of what an election day statement of results might look like, so too should the analysis team prepare ahead of time the software “shell” for the presentation of graphics. That “shell” should reflect choices about format, addressing issues like: Will the data on key questions be illustrated with bar charts? Will they be presented using pie charts? Or will they be numeric tables? Will the charts include the organization’s logo? How will each of the graphs or tables be labelled?
These questions may seem trivial, but it is essential to eliminate in advance as many things as possible that may cause election-day disagreements and lost time. Such disagreements have delayed press conferences, and they have led to missed media opportunities. Advance preparation avoids such problems; more importantly, it saves time on election day and eliminates the possibility of mistakes that can damage the credibility of the election observers.
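One way to implement the “shell,” sketched below with Python and matplotlib, is a single charting function that fixes every design decision in advance so that on election day only the numbers change (the labels, color and source line are placeholders, not a prescribed format):

```python
import matplotlib.pyplot as plt

def bar_shell(labels, values, title, outfile):
    """Pre-agreed bar-chart template: fixed size, color, axis labels,
    source line and file format, so election-day charts require no
    new design decisions."""
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.bar(labels, values, color="#336699")
    ax.set_title(title)
    ax.set_ylabel("Percent of sampled polling stations")
    ax.set_ylim(0, 100)
    fig.text(0.99, 0.01, "Source: [organization] quick count",
             ha="right", fontsize=8)
    fig.tight_layout()
    fig.savefig(outfile, dpi=200)
    plt.close(fig)

# On election day, only the numbers change, never the format.
bar_shell(["Before 7:00", "7:00-8:00", "After 8:00", "Not opened"],
          [62, 29, 5, 4],
          "When did the first voter cast a ballot?", "opening.png")
```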
Establishing an Election Day Protocol
Analysts should also prepare for election day by establishing a working protocol for the management of results produced by the analysis team. This protocol can significantly reduce the potential for election day friction with quick count leadership and mistakes like forcing premature release of data. The protocol should clearly address the following questions: How, when, and to whom will the analysts report the results of the analysis on election day? These issues need to be discussed and agreed upon prior to election day.4
The political leadership of civic organizations does not always understand precisely what is entailed in the analysis of election day observation data, and its expectations are sometimes unfounded. Furthermore, there are extraordinary pressures surrounding election day.
Quick count organizers are under external pressure to release results as quickly as possible. The pressures can come from multiple sources, including: the media, international observer groups, representatives of donor countries, political parties, and even the election commission. The constraint facing the analyst is that it takes time for data to arrive and be entered before they can be analyzed. Moreover, analysts need to have enough data to undertake a reliable analysis. If leadership bows to pressures and makes premature pronouncements, they may be inaccurate and produce extraordinarily negative consequences.
Steps in the Analysis of the Qualitative Data
On election day, the analysis of the qualitative data usually proceeds through three discrete steps:
- Scanning the data—Identifying “outliers,” signs that something has gone wrong.
- Searching for systematic patterns—Determining whether problems are randomly distributed or clustered.
- Ascertaining the impact of the problems—Determining whether problems have a material impact on the outcome and favor any particular party or candidate.
Scanning the Data
The analysis of the qualitative data usually begins with a scan of the data and an analysis of the distribution of the responses to each and every question in the qualitative dataset. The task here is to identify “outliers,” those responses that signify that something might have gone wrong. Recall that all of the questions were drafted against, and informed in large part by, the election law and administrative regulations governing election day procedures. Consequently, the responses to each question will indicate whether those regulations were satisfied or whether some problem was detected.
Consider the case of responses to Question 1 in Form 1 above. The response categories for the question about “installation of the polling station” allow for four responses. The distribution of responses across the first three categories indicates what amounts to the “rate” of installation. In a well-run election the expectation would be that the majority of polling stations should be installed before 7:00 a.m. if the polls are to open to the public at 7:00 a.m. If a large proportion of polling stations were installed between 7:00 a.m. and 9:00 a.m., then these would be “late” but not necessarily problematic, depending on whether there were still ample opportunities for everyone at those polling stations to vote and on the absence of other problems. Far more problematic are those cases where observers report that the polling station was “not installed.” In those cases, significant numbers of voters may be disenfranchised unless extraordinary remedies are put in place by authorities. These cases will require further investigation by the analyst.
Analysts should report the distribution of responses across all categories, identify precisely which polling stations were “not installed” and attach the list of non-installed polling stations to the report of the distribution of installation times. The reason for attaching a case-by-case identification of each non-installed polling station becomes clear through experience. When reporting to the public that, say, 4 percent of the polling stations were “not installed,” the media typically ask two questions: Which ones? And why were they not installed? The first question can be addressed by supplying the attached list. The second question may be harder to answer in the initial report, but the reply should at least be: “We are investigating the matter.” Local knowledge might reveal that a polling station was not installed because it had very few voters registered there and it was merged with a polling station at the next table, one that also had very few registered voters. As long as all voters had a real opportunity to vote, there is no reason to assert that the problem was sufficient to compromise the fairness of the election. Contextual data collected prior to election day are also important; with these data it becomes possible to say whether levels of non-installation are higher or lower than in previous elections.
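In code, this first scan amounts to little more than a frequency table for each question plus the list of stations behind each problem category. A minimal pandas sketch on invented data (the column names and response codes are illustrative; “E” stands for “not installed,” following the example above):

```python
import pandas as pd

# Hypothetical slice of the election-day dataset: one row per sampled
# polling station, one column per question on Form 1.
df = pd.DataFrame({
    "station":      ["PS-0032", "PS-0147", "PS-0250", "PS-0391"],
    "q1_installed": ["A", "B", "A", "E"],  # E = "not installed"
})

# Step 1: the response distribution for each question...
print(df["q1_installed"].value_counts(normalize=True))

# ...and the exact stations behind each problem category, for the
# case-by-case list attached to the report.
problems = df.loc[df["q1_installed"] == "E", "station"]
print(problems.tolist())  # ['PS-0391']
```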
The same procedure should be followed for each and every question. Consider another case. Questions 6a-6f on Form 1 above have to do with the presence of materials at the polling station. Most election laws require that all of these materials be in place. The analyst, therefore, should scan the data for any cases that do not satisfy these criteria, and those cases should be identified. The same applies to the responses to Question 10 about the time of the first vote. If the response to the first vote question is “never,” meaning the observer recorded that no one voted, this indicates a serious problem at the polling station. The next step takes the analysis further.
Searching for Systematic Patterns
Step 1 procedures will indicate if anything has gone wrong, where it has gone wrong and what the potential scope of the problem is. Step 2 is essentially a search for systematic patterns. It begins with a statistical search for patterns of regularities, or irregularities, among the cases that step 1 analysis has identified as “problem cases.” Recall that if the problem cases are distributed randomly and the scale is not large, then the likely cause of the problems is simple human error. However, this has to be determined systematically, and there are two ways to proceed. The first is to determine whether the problem cases are clustered in any one region of the country, which can be established by cross-tabulating all of the problem cases by region of the country and, within region, by district.
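A hypothetical sketch of that cross-tabulation, with a chi-square test of independence as one way to quantify whether the problem cases cluster by region rather than falling at random (the data are invented, and with few problem cases the chi-square approximation should be read cautiously):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical flags produced in step 1: one row per sampled station.
df = pd.DataFrame({
    "region":  ["Capital"] * 6 + ["North"] * 6 + ["South"] * 6,
    "problem": [1, 1, 1, 0, 0, 0,
                0, 0, 0, 0, 0, 1,
                0, 0, 0, 0, 0, 0],
})

# Cross-tabulate problem cases by region (and, in practice, by district).
table = pd.crosstab(df["region"], df["problem"])
print(table)

# A small p-value suggests the problems cluster by region rather than
# occurring at random across the country.
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```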
If the problem cases are clustered, say in the capital city, or in a particular region, then the reasons behind this should be explored. A clustering of problem cases may signify an administrative problem within a particular district. In those cases, it is useful to alert the emergency team about the problem and to contact the observer groups’ regional or municipal supervisors to generate local information about why these problems arose. Regional or municipal supervisors are usually in the best position to get to the bottom of a localized problem—not least of all because they will be in contact both with the local observers and the local election commission officials.
While these local inquiries are being initiated, analysts should continue to analyze the data by cross-tabulating the problem cases with all other responses to questions in the qualitative forms. That strategy is important because it can shed light on the shape and depth of the problems with these cases. For example, if the polling station was “not installed” (Question 1, response E), then it should follow that people were not able to vote (Question 10, response E). A simple cross-tabulation of these two questions can establish definitively whether this was the case.
These cross-tabulation checks will also enable the analyst to determine whether most of the problems across most categories are concentrated within the same polling stations. This is a critical line of investigation. Once again, an example helps to illustrate the point. If the analyst takes the problem cases where polling stations were “not installed” (Question 1, response E) and crosstabulates these with the responses to Questions 2-4 and Questions 6a-6f, which concern the presence of polling station officials and election materials, then the results will allow the analyst to rule out, or isolate, certain reasons why the polling stations were not installed. If, for the majority of cases of non-installed polling stations, the analyst finds that the answer to Questions 6a-6f is uniformly “no” (the materials were not present), but the answers to Questions 2-4 were “A” (all nominated polling station officials were present), then the analyst would conclude that the problem of non-installation was not the absence of polling station officials, but probably the absence of proper election materials. Such a finding should be communicated to the observer group’s regional coordinator, who can be asked to investigate why materials did not arrive at these polling stations.
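The same rule-out logic in sketch form, on invented data: first a consistency check between Question 1 and Question 10, then a cross-tabulation that separates an “officials absent” explanation from a “materials absent” explanation (the column names and response codes are hypothetical):

```python
import pandas as pd

# Hypothetical responses for stations flagged "not installed" (Q1 = E).
df = pd.DataFrame({
    "q1":  ["E", "E", "E"],
    "q2":  ["A", "A", "A"],     # A = all nominated officials present
    "q6a": ["no", "no", "no"],  # ballots present?
    "q10": ["E", "E", "A"],     # time of first vote; E = never
})

flagged = df[df["q1"] == "E"]

# Consistency check: a station that was never installed should also
# report that no one voted (Q10 = E). Any exception needs follow-up.
inconsistent = flagged[flagged["q10"] != "E"]
print(len(inconsistent), "station(s) to investigate")

# Rule-out logic: officials present but materials absent points to a
# materials-delivery problem, not an absence of polling officials.
print(pd.crosstab(flagged["q2"], flagged["q6a"]))
```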
The analysis might reveal an administrative problem, as with the above example. These findings should form a part of the observer groups’ report. Alternatively, information from a local coordinator may reveal that the polling stations that were “not installed” are not really a problem at all. The polling station might not have been installed for sensible administrative reasons. Local knowledge might reveal that the polling station was not installed because it had very few voters registered there and it was merged with the next polling station, one that also had very few registered voters. As long as all voters had a real opportunity to vote, there is no reason to assert that there was a problem.
However, the observer group’s municipal coordinator may determine that materials (for example, ballots) were not delivered to the polling station in the quick count sample nor to any other polling stations in the surrounding area. Analysis of past voting patterns may reveal that voters in this area tend to favor a particular political party. This could indicate deliberate political discrimination affecting a local election, or it could turn out to be part of a national trend.
In the interpretation of the qualitative evidence, therefore, the analyst should be prepared to combine local information with information that comes from the qualitative dataset.
Determining the Impact of Problems
In Step 3, analysts determine the impact of “the problems.” At issue is the question: Do the scope and scale of the problems identified in Steps 1 and 2 have a systematic and/or material impact on any particular political party or candidate?
The data from the qualitative reports are a part of the same dataset as the data reported for the quick count. Because there are both qualitative and vote count data merged in the same dataset, it is possible to determine whether qualitative problems are related in systematic ways to vote count results. The crosstabulation of qualitative results with vote count results can incorporate items from either Form 1 or Form 2. The basic logic can be illustrated with a simple example.
Transparency is an essential characteristic of democratic elections, and the electoral rules allowing party agents to be present at polling stations are intended to help ensure transparency. The theory is that party agents from competing parties will serve as checks on the transparency of polling station procedures, including the counting process. Most elections feature at least two major parties with a reasonable chance to win national office, but some parties are better organized than others. All parties may be entitled to have party agents present at all polling stations, but not all parties will necessarily have the organizational capacity to place party agents in each and every polling station to watch the vote count. A vote count might qualify as “transparent” at any particular polling station when party agents representing at least two different and competing political parties are present and can actually observe ballots being removed from the ballot box, the determination of for whom they should be counted and the recording of the results.
By combining the qualitative data with the numeric quick count data, it is possible to evaluate the issue of transparency systematically. Questions 6a-6c on Form 2 above and Questions 9a-9c on Form 1 indicate which party agents were present at which polling stations, and Questions 9a-9f on Form 2 indicate vote results. Using the qualitative data, analysts can identify precisely, first, which polling stations had fewer than two party agents present and, second, what the vote count result was at each of those polling stations.
Following this approach makes it possible to answer important questions: Did polling stations with fewer than two party agents have vote results that were systematically different from the results at polling stations where two or more party agents were present? Did presidential candidate A systematically win more votes in those polling stations where an agent from party A was the only party agent present? If the answers to those questions are “yes,” then the data should be probed further. One possible reason for that finding might simply be that Party A is stronger in that region of the country; that outcome does not necessarily mean that fraud has taken place. The data should be further analyzed, however, to determine whether the same finding holds for polling stations in the same region/district where two or more party agents were present. Further analysis will be able to determine: 1) just how many polling stations in the sample had fewer than two party agents present; 2) the size of the vote “dividend” (if any) to Party A where Party A agents are the only party agents present; and 3) whether the size of that “dividend” could have had any impact on the overall outcome of the election.5
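A minimal sketch of the “dividend” computation on an invented merged dataset: group polling stations by whether fewer than two party agents were present, compare Party A's mean vote share across the groups, and measure the gap (all figures are hypothetical, and a real analysis would also weight by votes cast and control for region/district):

```python
import pandas as pd

# Hypothetical merged dataset: agent counts from Form 1, vote counts
# from Form 2, one row per sampled polling station.
df = pd.DataFrame({
    "agents_present": [1, 3, 2, 1, 4, 2],
    "votes_a":        [210, 150, 160, 230, 140, 155],
    "votes_total":    [300, 310, 295, 305, 300, 290],
})

df["share_a"] = df["votes_a"] / df["votes_total"]
df["low_oversight"] = df["agents_present"] < 2

# Compare Party A's mean vote share where fewer than two party agents
# were present against stations with normal oversight, and count how
# many stations fall into each group.
by_group = df.groupby("low_oversight")["share_a"].agg(["mean", "count"])
print(by_group)

# The "dividend": the gap in Party A's share at low-oversight stations
# (to be probed further by region/district before drawing conclusions).
dividend = by_group.loc[True, "mean"] - by_group.loc[False, "mean"]
print(f"Party A dividend at low-oversight stations: {dividend:.1%}")
```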
The general point about combining the qualitative results with the count results has been made using the case of “transparency.” Exactly the same kind of combined analysis can be applied to a number of other combinations. For example, analysts can examine the impact of irregularities on vote count results (Form 2, Question 2). The very same principle applies when a party contests the results from a polling station (Form 2, Question 10). In that case, it can be systematically determined whether all, or most, challenges were issued by the party in second place.6
*All content is pulled from NDI’s “The Quick Count and Election Observation” handbook.
1 Quick counts discussed in this handbook most often concern national elections (e.g., presidential elections and elections for proportional representation by national political party lists). The data collected for such quick counts are highly reliable for evaluating national developments but will not necessarily be able to assess processes and results at the sub-national level, such as elections for single-member legislative districts or local elections – unless the quick count is specifically designed to do so.
2 These forms reflect the best elements of forms used in a number of countries, most especially Peru and Nicaragua. The original Nicaraguan forms are contained in Appendices 9A and 9B; the Nicaraguan forms include instructions to the quick count volunteers.
3 These forms are not intended to present a definitive list of questions. They must always be adapted somewhat to meet the conditions in each election, and there are some questions that may be considered for inclusion in any election. For example, groups may consider placing a question at the end of the form asking the observer whether the results at her or his assigned polling station should be “accepted” or should be “challenged,” or the observer may be asked to rate the overall process at the polling station on a scale from one to five (with one being “excellent,” two being “good,” three being “neutral,” four being “bad,” and five being “unacceptable”).
4 See Chapter 8: The “End Game” for a discussion of developing and following a protocol for sharing internally and releasing quick count results.
5 Here the size of the sample is very important. If a national sample is small, with correspondingly large margins of error, it will not be possible to conduct this type of analysis with a significant degree of confidence, and certain problems might not even be detected.
6 The qualitative data provide a sound basis upon which to draw inferences about the severity of identified problems or the importance of the absence of significant problems. However, groups must use caution when speaking publicly about problems identified and the likely impact on the overall quality of election-day processes. Statements or reports should be carefully crafted so the significance of the qualitative data is not over-extended. For additional information on public statements, see Chapter Eight, The “End Game.”