Effective control of infectious diseases depends on two closely linked capabilities: the ability to accurately determine the disease status of individual animals or herds, and the ability to systematically monitor populations over time to determine how the disease epidemiology is changing. Diagnostic testing provides information about infection or immunity at the animal level, while surveillance systems aggregate information across animals, herds, regions, or countries to support early detection, control programmes, and claims of disease freedom.
Most diagnostic tests for infectious diseases fall into one of two broad categories: tests that detect the pathogen itself, and tests that detect the host’s immune response to it (serological tests).
Choosing the appropriate test requires an understanding of disease pathogenesis: how infection progresses over time from exposure through to clinical disease, recovery, latency, or long-term carrier states. Without this context, test results can easily be misinterpreted, particularly when tests are applied at the wrong stage of infection or in inappropriate populations.
Pathogen detection tests aim to identify the presence of the infectious agent itself and generally indicate that the animal is currently infected. Common examples include culture, microscopy, antigen-detection assays, and nucleic acid methods such as PCR.
The sensitivity of these tests is strongly influenced by disease pathogenesis. For some infections, such as bovine tuberculosis, Johne’s disease, or feline leukaemia virus, pathogens may enter latent or low-shedding states. In these cases, an infected animal may test negative simply because the pathogen is not present in sufficient quantities in the sampled tissue at the time of testing.
False negative results may also occur when testing is performed too soon after exposure, before the pathogen has had time to replicate to detectable levels.
Serological tests detect antibodies produced in response to infection and are usually performed on serum or whole blood. Because antibody production takes time, these tests are generally poor indicators of very recent exposure.
A positive antibody test result can reflect several different biological situations:
Infection: the animal has been infected and mounted an immune response. For transient infections such as bovine viral diarrhoea virus (BVD), this may indicate past exposure and clearance. For chronic infections such as Johne’s disease, it may indicate ongoing infection.
Vaccination: many tests cannot distinguish vaccine-induced antibodies from those produced after natural infection. This limitation has led to the development of DIVA tests and vaccines (Differentiating Infected from Vaccinated Animals), which are designed to overcome this problem.
Maternal antibodies: young animals may test positive due to antibodies acquired through colostrum. These antibodies can persist for weeks to months, depending on the pathogen and the level of passive transfer.
Cross-reactivity: antibodies may react with closely related pathogens. For example, BVD antibody tests may also detect antibodies to border disease virus, and tests for Mycobacterium bovis may cross-react with Mycobacterium avium subsp. paratuberculosis.
Non-specific reactions: occasionally, test results are positive due to reactions unrelated to true exposure.
For diseases where antibodies persist long after infection, serology alone may provide little information about when infection occurred. In some situations, paired samples collected 2–4 weeks apart are used to look for rising antibody titres, which can indicate recent or ongoing infection.
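The paired-sample logic can be sketched in code. The function below is a hypothetical helper, not part of any laboratory system; it assumes titres are reported as reciprocal dilutions and uses the conventional four-fold rise threshold as its default.

```python
def titre_rise(acute: int, convalescent: int, min_fold: int = 4) -> bool:
    """Return True if the convalescent titre shows at least a min_fold
    rise over the acute titre, suggesting recent or ongoing infection
    (a four-fold rise is the usual convention)."""
    if acute <= 0:
        # Seroconversion: a seronegative acute sample followed by a
        # positive convalescent sample also indicates recent infection.
        return convalescent > 0
    return convalescent / acute >= min_fold

# Paired sera collected 2-4 weeks apart, as reciprocal titres:
print(titre_rise(8, 64))   # True: an eight-fold rise
print(titre_rise(16, 32))  # False: a two-fold rise is within assay variation
```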
Surveillance is the continuous and systematic collection, analysis, interpretation, and dissemination of health-related data to support decision-making and action. Monitoring is a broader, less intensive activity focused on routine data collection, whereas surveillance is typically designed to trigger specific responses when defined thresholds are exceeded.
Surveillance systems support multiple functions, including early detection of new or exotic disease, estimation of prevalence and monitoring of trends, evaluation of control programmes, and support for claims of disease freedom.
Surveillance systems can be classified according to how data are collected, how proactive the system is, and the purpose it is designed to serve. In practice, most national and regional animal-health surveillance programmes combine several of the system types described below, each contributing different strengths.
Passive surveillance relies on routine reporting of disease events as they are observed in the course of normal activities. Reports may come from farmers, veterinarians, diagnostic laboratories, abattoirs, or regulatory agencies, depending on the disease and legal framework.
This approach is relatively inexpensive and can cover large populations over long periods. It is well suited to detecting obvious or severe disease events and forms the backbone of most notifiable disease systems. However, passive surveillance is highly dependent on awareness, motivation, and compliance among reporters. Under-reporting is common, particularly for endemic or subclinical diseases, and reporting patterns may change over time due to economic pressures, regulatory concerns, or shifts in diagnostic behaviour.
Enhanced passive surveillance aims to strengthen passive systems without moving fully to active data collection. This is achieved by deliberately improving the sensitivity and quality of routine reporting.
Common enhancement strategies include education and awareness campaigns, clear and standardised case definitions, simplified reporting processes, targeted reminders, feedback to reporters, and in some cases financial or non-financial incentives. Enhanced passive surveillance is often used during periods of heightened concern, such as increased risk of exotic disease incursion, and can substantially improve early detection while remaining relatively cost-efficient.
Active surveillance involves the deliberate and systematic collection of data, regardless of whether disease is suspected. This may include structured surveys, targeted testing programmes, farm visits, or scheduled inspections designed specifically for surveillance purposes.
Because data collection is planned and controlled, active surveillance is generally more sensitive and more representative of the target population than passive approaches. It is particularly important for demonstrating disease freedom, estimating prevalence, and evaluating control programmes. The main limitation is cost, as active surveillance requires dedicated resources, logistics, and analytical capacity.
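For the prevalence-estimation role of active surveillance, the required sample size under simple random sampling can be approximated with the standard normal-approximation formula. This is a minimal sketch assuming a perfect test and a large population; finite-population corrections and imperfect test performance would modify it.

```python
import math

def sample_size_prevalence(expected_p: float, margin: float, z: float = 1.96) -> int:
    """Animals to sample to estimate prevalence within +/- margin
    (absolute) at ~95% confidence, under simple random sampling
    with a test assumed perfect."""
    n = z ** 2 * expected_p * (1 - expected_p) / margin ** 2
    return math.ceil(n)

print(sample_size_prevalence(0.10, 0.03))  # 385
print(sample_size_prevalence(0.50, 0.05))  # 385 (p = 0.5 is the worst case)
```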
Sentinel surveillance uses selected herds, flocks, clinics, or geographic locations as early warning systems. These sentinel units are chosen because they are at higher risk of exposure, strategically located, or able to provide timely and high-quality data.
Sentinel systems can provide rapid detection of emerging problems and are often used for exotic diseases, vector-borne diseases, or conditions with strong seasonal patterns. While sentinel surveillance is efficient, it does not provide population-wide estimates unless combined with other surveillance components, and careful selection of sentinel units is essential to avoid biased conclusions.
Syndromic surveillance focuses on patterns of clinical signs, production changes, or other non-specific indicators rather than confirmed diagnoses. Examples include increases in abortions, unexpected mortality, drops in milk production, or clusters of similar clinical presentations.
Because it does not require laboratory confirmation, syndromic surveillance can detect unusual events earlier than traditional diagnostic-based systems. It is particularly valuable for early warning and situational awareness but typically requires follow-up investigation to determine the underlying cause. Interpretation can be challenging, as changes may reflect non-infectious factors such as weather, management changes, or market pressures.
Slaughterhouse and laboratory surveillance use data generated through routine meat inspection and diagnostic testing to monitor disease trends. Abattoir surveillance can be especially informative for endemic conditions with lesions that persist to slaughter, while laboratory submissions provide detailed diagnostic information across a range of diseases.
These systems are often highly cost-effective because they leverage existing data streams. However, they are subject to important biases. Slaughterhouse surveillance excludes animals that die or are culled earlier, and laboratory data reflect testing behaviour rather than true disease occurrence. Careful interpretation, consistent case definitions, and awareness of changes in submission patterns are essential when using these data for surveillance purposes.
Surveillance programmes are often described as either conventional or risk-based, but in reality these represent ends of a design spectrum rather than mutually exclusive categories. Both approaches share the same underlying goal: to provide reliable information that supports disease control, early detection, or claims of disease freedom. The distinction lies in how surveillance effort is allocated across the population.
Conventional surveillance systems are built around the principle of representativeness. Sampling is designed so that each unit in the target population has a known, and often equal, probability of being selected. This allows results to be extrapolated to the wider population using standard statistical methods.
Key features of conventional surveillance include clearly defined target populations, random or systematically representative selection of regions, farms, or animals, and pre-specified sample sizes based on assumed prevalence, test performance, and desired confidence levels. Analytical approaches are well established and widely taught, and the resulting outputs are generally straightforward to interpret and communicate.
Because of this methodological transparency, conventional surveillance is well accepted by regulatory authorities and is commonly used for national reporting, international trade assurances, and long-standing control programmes. It performs well when disease prevalence is moderate to high, when resources are sufficient, and when broad population-level estimates are required.
However, conventional surveillance can become inefficient when disease is rare, highly clustered, or unevenly distributed across the population. Large sample sizes may be required to achieve adequate confidence, leading to high costs and low information yield per unit of effort, particularly in mature control programmes or post-eradication settings.
Risk-based surveillance reallocates surveillance effort according to risk, rather than distributing it evenly across the population. It explicitly recognises that not all hazards are equally important, and not all sub-populations contribute equally to the likelihood of disease detection.
The term “risk” in this context is used in two complementary ways. First, risk assessment is used to prioritise hazards based on their likelihood of occurrence and the severity of their consequences for animal health, public health, trade, or welfare. Second, risk information is used to identify sub-groups within the population where the probability of infection is higher, such as particular regions, species, production systems, age groups, or management practices.
In practice, this means that surveillance is deliberately targeted towards places and animals where it is most likely to detect disease if it is present. Diagnostic tests, sampling frequency, and sample sizes may be adapted to reflect the risk profile of different strata within the population.
The primary aim of risk-based surveillance is not to reduce sensitivity, but to achieve equivalent or greater sensitivity than conventional approaches while improving cost-effectiveness. This is done by concentrating resources where they are most informative and by integrating epidemiological, biological, and economic evidence into surveillance design and evaluation.
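The efficiency argument can be made concrete with a small probability calculation. The numbers below (1% design prevalence, 90% test sensitivity, a high-risk stratum holding 20% of the population at three times the average risk) are purely illustrative assumptions.

```python
def detection_prob(n: int, prevalence: float, se: float) -> float:
    """P(at least one test-positive) when n animals are sampled from a
    stratum with the given prevalence, using a test with sensitivity se."""
    return 1 - (1 - se * prevalence) ** n

p, se = 0.01, 0.90              # design prevalence, test sensitivity
prop_high, rr_high = 0.20, 3.0  # high-risk stratum: 20% of animals, 3x risk
# Scale stratum prevalence so the population average stays at p:
mean_rr = prop_high * rr_high + (1 - prop_high) * 1.0  # 1.4
p_high = p * rr_high / mean_rr                         # ~2.1% in the stratum
print(f"{detection_prob(100, p, se):.3f}")       # 0.595: random sampling
print(f"{detection_prob(100, p_high, se):.3f}")  # 0.857: targeted sampling
```

With the same 100 samples, targeting the high-risk stratum raises the probability of detecting at least one case, which is the sense in which sensitivity is maintained or improved at lower cost.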
Risk-based surveillance has been successfully applied in a range of settings, including Trichinella surveillance in Danish pig production, Salmonella control programmes in pork industries, and targeted surveillance for exotic diseases such as foot-and-mouth disease embedded within existing monitoring systems.
Conventional surveillance offers broad coverage, methodological simplicity, and strong regulatory acceptance. It is particularly valuable when population-wide estimates are needed, when disease distribution is relatively uniform, or when surveillance outputs must be easily communicated to non-technical audiences. Its main limitations are cost and inefficiency in low-prevalence or highly heterogeneous systems.
Risk-based surveillance typically delivers a better benefit–cost ratio and is especially efficient for rare diseases, emerging threats, and situations where resources are constrained. However, it relies heavily on the availability and quality of risk information, requires more complex design and analysis, and may face challenges in acceptance if the rationale for targeting is not clearly articulated and validated.
In practice, most contemporary animal health surveillance programmes combine elements of both approaches. Conventional surveillance often provides baseline coverage, representativeness, and regulatory confidence, while risk-based components enhance sensitivity and efficiency by focusing effort on higher-risk hazards and sub-populations.
Designing effective surveillance therefore involves deciding not whether to use conventional or risk-based methods, but how to integrate them in a way that best meets the programme’s objectives within available resources.
A good surveillance programme is built by working through a consistent set of design steps. Most of these steps are the same whether you are using conventional (more representative) sampling or a risk-based (more targeted) approach. The difference is simply whether you use risk information to prioritise where, when, and who to sample.
Step 1. Define the objective
Start by stating exactly what the programme needs to achieve, because the objective determines everything else.
Common objectives include early detection of disease incursions, estimation of prevalence or trends, evaluation of a control programme, and demonstration of disease freedom.
Be specific about the population of interest (species and geography), and the decision the surveillance output will support (e.g., “trigger an investigation if detected” vs “estimate prevalence annually”).
Step 2. Select the hazard to monitor
Define the hazard clearly. This could be a specific pathogen (virus, bacterium, parasite), a syndrome (e.g., abortion storms), or a defined disease outcome.
If multiple hazards are possible, prioritise based on impact and feasibility. Where available, use risk assessment thinking to weigh the likelihood of each hazard occurring against the severity of its consequences for animal health, public health, trade, or welfare.
Step 3. Specify the case definition
Write a case definition that is practical and consistent with the diagnostics you can access.
This usually has layers, such as suspect, probable, and confirmed cases, each with explicit clinical and laboratory criteria.
Clear case definitions are essential for consistent data capture and for interpreting trends over time.
Step 4. Choose test procedures and the testing algorithm
Select diagnostic methods that match the stage of infection you are trying to detect and the type of inference you want to make.
Key decisions include the sample type to collect, the test or combination of tests to use, whether tests are applied in series or in parallel, and how presumptive positives will be confirmed.
If you are designing for disease freedom, test sensitivity becomes especially important because it determines how many animals you need to test to achieve a chosen level of confidence.
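The dependence on sensitivity can be illustrated with the standard large-population approximation for freedom testing, n = ln(1 − confidence) / ln(1 − Se × design prevalence). A sketch, assuming perfect specificity:

```python
import math

def freedom_sample_size(design_prev: float, se: float, confidence: float = 0.95) -> int:
    """Animals to test (all negative) to reach the stated confidence of
    detecting infection at design_prev, given test sensitivity se.
    Large-population approximation; specificity assumed perfect."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - se * design_prev))

# Design prevalence 2%, 95% confidence:
print(freedom_sample_size(0.02, 1.00))  # 149 with a perfect test
print(freedom_sample_size(0.02, 0.70))  # 213 with a 70%-sensitive test
```

Lower sensitivity directly inflates the number of negative results needed to reach the same confidence.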
Step 5. Define the target population and sampling frame
Specify exactly who could be sampled, and what list or system you will sample from (the sampling frame).
Work through the levels explicitly: region, herd or flock, individual animal, and the sample or specimen collected from each animal.
If you are using a risk-based design, this is the step where you define the risk factors you will use to prioritise parts of the population (e.g., border regions, high-movement herds, certain age groups, specific management systems). If you are using a more conventional design, this is where you define how you will obtain a representative sample (often via random selection).
Step 6. Set the timing and sampling interval
Decide when sampling will occur and how often it will repeat.
Base this on the expected course of infection, seasonal or production-cycle patterns, the risk of introduction, and how quickly detection needs to occur.
For some objectives (like early detection), frequent sampling in selected places may be more valuable than infrequent sampling spread widely.
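A rough way to compare schedules, under the simplifying assumption that each sampling round independently detects an ongoing incursion with a fixed probability, is the mean of a geometric distribution:

```python
def expected_rounds_to_detection(p_detect_per_round: float) -> float:
    """Mean number of rounds until first detection, assuming each round
    detects independently with the given probability (geometric mean
    1 / p)."""
    return 1 / p_detect_per_round

# Quarterly sampling that detects with probability 0.4 per round:
print(expected_rounds_to_detection(0.4) * 3, "months on average")   # 7.5
# Monthly sampling that detects with probability 0.15 per round:
print(expected_rounds_to_detection(0.15) * 1, "months on average")  # ~6.7
```

Even with a lower per-round detection probability, the more frequent schedule can reach detection sooner, which is the intuition behind concentrating frequent sampling in selected places.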
Step 7. Plan the analysis and define what “success” looks like
Decide in advance how results will be analysed and what the outputs will be.
Examples include prevalence estimates with confidence intervals, trend analyses over time, estimates of the probability of disease freedom, and time-to-detection measures.
Make sure the analysis plan matches the objective. If the objective is early detection, the key outcome might be “time to detection” rather than a precise prevalence estimate.
Step 8. Define reporting and communication pathways
Plan how results will be communicated, to whom, and how quickly.
At a minimum, specify who receives results, in what format, how frequently, and how quickly urgent findings are escalated.
Communication design matters because surveillance only has value if it drives timely decisions.
Step 9. Predefine actions for positive findings
Before surveillance begins, document what will happen if positives are detected.
This should include confirmatory testing, epidemiological investigation and tracing, any interim control measures such as movement restrictions, and who must be notified and by whom.
Having this agreed in advance prevents delay and confusion during an event.
Step 10. Build feedback and quality assurance into the system
Surveillance quality depends on the people collecting and submitting data. Build a feedback loop so participants know what is happening and why their contribution matters.
Quality assurance should cover sample collection and handling, completeness and accuracy of data capture, laboratory performance, and training of the people involved.
In risk-based programmes, feedback is also used to update the risk picture and refine targeting over time.
Freedom from disease is not an absolute statement. It is a probabilistic conclusion based on surveillance evidence and explicit assumptions. In practice, surveillance is designed to answer the question: if the disease were present at or above a specified design prevalence, how likely is it that the surveillance system would have detected it?
For this reason, any claim of disease freedom must be framed in terms of the design prevalence assumed, the level of confidence achieved, the population and time period covered, and the surveillance evidence on which the claim rests.
Without this context, freedom claims are difficult to interpret or defend.
Strong claims of disease freedom are rarely based on a single data source. Instead, they typically draw on a combination of surveillance components, such as structured surveys, passive reporting of clinical suspicions, sentinel units, abattoir and laboratory data, and documented negative test results over time.
Together, these elements form a surveillance system rather than a one-off survey.
The reliability of disease freedom claims depends heavily on diagnostic test performance, particularly test sensitivity. If infected animals are unlikely to be detected by the test being used, confidence in freedom will be limited regardless of sample size.
Sampling strategies must also reflect how infection would be distributed if present. In structured populations such as herds or flocks, infection is often clustered, meaning that surveillance must account for group-level rather than purely individual-level detection.
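Group-level detection can be sketched as a two-stage calculation: the probability that a sampled infected herd tests positive, then the probability that at least one infected herd is caught across the programme. All figures below (30 animals per herd, 10% within-herd prevalence, 85% test sensitivity, 200 herds, 1% herd-level design prevalence) are illustrative assumptions, and the binomial approximation ignores finite herd sizes.

```python
def herd_sensitivity(n_tested: int, within_prev: float, se: float) -> float:
    """P(an infected herd yields >= 1 positive test) when n_tested
    animals are sampled from it; binomial approximation."""
    return 1 - (1 - se * within_prev) ** n_tested

def system_sensitivity(n_herds: int, herd_prev: float, hse: float) -> float:
    """P(the programme detects disease) if at least herd_prev of herds
    are infected and each tested infected herd is detected with P = hse."""
    return 1 - (1 - hse * herd_prev) ** n_herds

hse = herd_sensitivity(30, 0.10, 0.85)    # 30 animals per herd, 10% within-herd
sse = system_sensitivity(200, 0.01, hse)  # 200 herds, 1% herd-level prevalence
print(f"herd-level Se {hse:.3f}, system Se {sse:.3f}")  # 0.930, 0.846
```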
Risk-based surveillance is especially valuable for demonstrating disease freedom once prevalence has been reduced to very low levels. At this stage, conventional random sampling can require very large sample sizes to maintain confidence, making it costly and inefficient.
Risk-based approaches improve efficiency by targeting sampling towards regions, herds, or animals with a higher probability of infection, by weighting the evidence that each surveillance component contributes, and by combining multiple data streams into a single measure of confidence in freedom.
These approaches are not intended to weaken freedom claims, but to maintain or improve sensitivity while using resources more efficiently.
Confidence in disease freedom is not static. It declines as time passes and new opportunities for disease introduction arise, such as animal movements, imports, or contact with wildlife reservoirs.
For this reason, freedom from disease should be viewed as a status that is maintained through ongoing surveillance rather than a permanent outcome. Follow-up surveillance is designed to restore confidence by accounting for both previous evidence and the assessed risk of introduction since the last evaluation.
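This maintenance cycle can be sketched as a simple annual update: discount last year's probability of freedom by the assessed risk of introduction, then apply Bayes' rule for a year of all-negative surveillance. The figures used (80% system sensitivity, 5% annual introduction risk, a deliberately uninformative 50% starting prior) are illustrative assumptions, and specificity is taken as perfect.

```python
def update_freedom(p_free_prior: float, sse: float, p_intro: float) -> float:
    """One period: discount the prior P(free) for introduction risk,
    then update on a period of all-negative surveillance with system
    sensitivity sse (specificity assumed perfect)."""
    prior = p_free_prior * (1 - p_intro)
    # Bayes: P(free | all negative) = P(free) / [P(free) + P(infected) * P(missed)]
    return prior / (prior + (1 - prior) * (1 - sse))

p_free = 0.50  # uninformative starting prior
for year in range(1, 4):
    p_free = update_freedom(p_free, sse=0.80, p_intro=0.05)
    print(f"year {year}: P(free) = {p_free:.3f}")
# Confidence accumulates: 0.819, 0.946, 0.978 over three all-negative years.
```

Setting the surveillance term aside for a year shows the decay: the prior is simply multiplied by (1 − p_intro), which is why confidence must be actively restored.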
Clear communication is essential when presenting disease freedom claims. Surveillance outputs should always state the design prevalence and confidence level used, the population and period the claim covers, and the key assumptions and data sources behind it.
Framing disease freedom in this way supports transparency, maintains stakeholder trust, and enables informed decision-making.