
Project Work Plan

Department of the Interior USGS GE PES
Fiscal Year 2012 Study Work Plan

PROJECT TITLE: Optimal Control Strategies for Invasive Exotics in South Florida
PRINCIPAL INVESTIGATOR CONTACT INFORMATION: Fred A. Johnson, Southeast Ecological Science Center, U.S. Geological Survey, 7920 NW 71 Street, Gainesville, FL 32653. Ph: (352) 264-3488; E-mail:

STATEMENT OF PROBLEM: The establishment and proliferation of exotic plants and animals can interfere with native ecological processes and can cause severe stress to sensitive ecosystems. Perhaps nowhere in the contiguous U.S. is this more evident than in South Florida, where millions of dollars are spent annually to monitor and control the spread of exotics such as Brazilian pepper (Schinus terebinthifolius), melaleuca (Melaleuca quinquenervia), the Mexican bromeliad weevil (Metamasius callizona), and the Burmese python (Python molurus bivittatus), just to name a few. However, agencies with responsibility for protecting native ecosystems in South Florida have limited resources with which to control the spread of invasive exotics. Therefore, there is a pressing need to develop decision-support tools for cost-effective management.

OBJECTIVES: This study has the following objectives:

METHODS: Within the constraints of their budgets, responsible agencies must routinely make tradeoffs inherent in controlling the spread of invasives; e.g., monitoring abundance in well-established areas vs. monitoring potential sites for colonization, eradicating large infestations vs. eradicating newly colonized sites, and monitoring populations vs. implementing control measures. There are also temporal tradeoffs that must be considered because decisions made now produce a legacy for the future (e.g., how long to wait before implementing controls). These tradeoffs can be investigated formally within the context of a decision-theoretic framework, which can identify optimal actions based on management goals and constraints, available budgets, and the demography of the invasive population. A key advantage of a decision-theoretic framework is the ability to make optimal decisions in the face of various sources and degrees of uncertainty, such as the rate at which an invasive will colonize new areas or the variable effectiveness of control measures. The product of this approach is a state-dependent management strategy that prescribes an optimal action for each time period for each possible state of the system. In this case, the state of the system would be characterized by extant knowledge of the spatial distribution and abundance of the target invasive. The state-dependent strategy can also be adaptive, as predicted and observed system responses are compared over time.

Development of a decision-theoretic framework involves the specification of (1) unambiguous management objectives and constraints; (2) a set of available options regarding monitoring and control activities; (3) one or more models predicting the response of the invasive population to management activities and uncontrolled environmental factors; and (4) a monitoring program to direct state-dependent actions and to assess management performance. Thus, this effort would require the active involvement and engagement of both researchers and managers. Experience suggests that a prototype framework for a target species can be developed using existing information within a matter of months rather than years. Once the components of the decision problem are specified, the best management strategy can be derived using computing algorithms for Markov decision processes. The prototype can then be refined based on the results and the needs, desires, and perspectives of managers.
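For concreteness, the Markov decision process machinery referred to here operates on a standard set of ingredients; the notation below is generic (not taken from the work plan) and simply maps the four components listed above onto the usual mathematical objects:

(X, A, P(x_{t+1} | x_t, a_t), L(x_t, a_t)),

where X is the set of system states defined by monitoring (component 4), A is the set of available monitoring and control options (component 2), P(x_{t+1} | x_t, a_t) is the model-based transition probability (component 3), and L(x_t, a_t) is the loss implied by the objectives and constraints (component 1). Dynamic programming then finds, for each state, the action that minimizes expected cumulative loss.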

Suppose we have a system composed of patches (or “grid cells”) with known infestations and those that have the potential for infestation but where none has been observed. Assume that the invasives manager is able to conduct reconnaissance surveys at periodic intervals to identify infested patches. Assume also that the manager has some idea of the probability of detecting an infestation in a patch, given that the patch is infested. This detection probability may be estimated directly using an appropriate survey design, may be based on previous surveys, or may simply represent the manager’s best guess. We may expect the detection probability to be low for cryptic species like Burmese pythons and relatively high for sessile plants like Brazilian pepper. The number of infested patches observed, in combination with the detection probability, provides information about the number of infested patches that were missed by the reconnaissance survey. Following each survey, the manager can choose to (a) do nothing until the next reconnaissance, (b) attempt control of the infested patches that were detected, or (c) re-survey apparently empty patches and control whatever infestations are found (“search and destroy”). Each action likely has a different cost, with doing nothing having the lowest cost and search-and-destroy the highest. The goal of the manager is to choose a single action after each reconnaissance survey that would be expected to minimize the number of infested patches over time (or, equivalently, maximize the number of empty patches) at the lowest possible cost. One way to express these competing objectives is in a loss function, in which the total loss is the sum of the direct costs of management activities and the opportunity costs of infested patches. Opportunity costs represent the ecosystem values forgone by allowing an infested patch to persist.1 Thus, at each decision point the manager desires a strategy prescribing an action a ∈ A for each possible state of the system X that satisfies:

[Loss function equation: minimize, over actions a ∈ A, the expected total loss summed over time, where each period's loss is the direct cost of the management action plus the opportunity costs of infested patches]

for a given budget. Note that this function implies a potential tradeoff between current and future losses. For example, while current loss can be minimized by doing nothing, we might expect future losses to increase as a result of more infestations. And while future losses might be minimized by controlling infested patches now, this will increase the current loss. This tradeoff between short- and long-term values is an inherent feature of sequential decision processes, in which myopic decisions are likely to erode management performance over the long term.
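As an illustration only, one concrete form such a loss function might take (the symbols C, O, and the horizon T are ours, not taken from the work plan's equation) is

\min_{a \in A} E\left[ \sum_{t=1}^{T} \big( C(a_t) + O(x_t) \big) \right],

where C(a_t) is the direct cost of the action taken at time t and O(x_t) is the opportunity cost of the infested patches in system state x_t. Using the relative costs suggested in footnote 1 (a treatment cost of 1 per infested patch treated and an opportunity cost of 1.5 per infested patch left untreated), a period in which three detected infestations are treated and two are left untreated would contribute 3(1) + 2(1.5) = 6 to the total loss.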

To solve the decision problem, we require the probability of observing n_{t+1} infested patches at time t+1, conditioned on n_t observed infestations, the assumed detection probability d_t, and the action a_t taken at time t. Following the notation of Williams et al. (2002, Analysis and Management of Animal Populations, Academic Press):

P(n_{t+1} | n_t, d_t, a_t) = \sum_{N\Psi_t} \sum_{N\Psi_{t+1}} P(N\Psi_t | n_t, d_t) \, P(N\Psi_{t+1} | N\Psi_t, a_t) \, P(n_{t+1} | N\Psi_{t+1}, d_{t+1})

where N is the total number of (infested and un-infested) patches and Ψ is the conditional probability that a patch is infested. Thus, the true state of the patches at time t depends on the observed number of infestations and the detection probability. The transition of the system from time t to t+1 depends on the true state of the patches and the action taken at time t. And the expected number of observed infestations at time t+1 depends on the true state of the system and the detection probability of the reconnaissance survey at time t+1. The detection probability at time t+1 might be assumed to be constant (d_{t+1} = d_t = d), vary randomly (e.g., d_{t+1} ~ beta(a, b)), or be predictable (e.g., based on observer expertise). The transition probability for infested patches, p(NΨ_{t+1} | NΨ_t, a_t), depends on the rates of colonization of empty patches and local extinction of infested patches, which in turn depend on the management action taken. The Hamilton-Jacobi-Bellman equation for solving the decision problem then is:

V^*(x_t | t) = \min_{a_t \in A} \left[ R(a_t | x_t) + \sum_{x_{t+1}} P(x_{t+1} | x_t, a_t) \, V^*(x_{t+1} | t+1) \right]

where R is the current loss and V* is the minimum future loss arising from a decision made in the present. This structuring of the problem characterizes a partially observable Markov decision process (POMDP), which in many cases can be analytically intractable. If, however, we are willing to assume that detection probability is known without error, so that:

[Detection probability equation and standard Markov decision process equation]

then we convert the problem into a standard Markov decision process (MDP), which can be solved readily using dynamic programming. Moreover, we can go one step further and allow d to take on a range of possible but discrete values, each with an assigned probability mass (with the masses summing to 1.0). This in turn produces discrete probability distributions for the corresponding transition and observation functions, which can also be used in a standard MDP.
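To make the model concrete, the following sketch (in Python rather than the MATLAB code this study will deliver) computes the one-step probability of the next observed count, marginalizing over the unknown true number of infested patches. The patch count, detection probability, colonization probability, action-specific extinction probabilities, and the uniform prior on the true number of infestations are all illustrative assumptions; for simplicity, each action is represented only by an extinction probability applied to every infested patch.

import numpy as np
from scipy.stats import binom

N = 20                                 # total number of patches (assumed)
d = 0.6                                # detection probability, treated as known (assumed)
c = 0.10                               # per-patch colonization probability (assumed)
ext = {"nothing": 0.05,                # local-extinction probability by action (assumed);
       "control": 0.60,                # for simplicity, applied to all infested patches
       "search_destroy": 0.80}         # rather than only the detected ones

def prob_next_obs(n_now, action):
    """P(n_{t+1} | n_t = n_now, d, action) over n_{t+1} = 0..N."""
    e = ext[action]
    # Posterior for the true number of infested patches M_t, given n_now detections
    # and a uniform prior on {n_now, ..., N} (an assumption).
    m_vals = np.arange(n_now, N + 1)
    post = binom.pmf(n_now, m_vals, d)
    post = post / post.sum()
    p_next = np.zeros(N + 1)
    for m, w in zip(m_vals, post):
        # Infested patches persist with probability 1 - e; empty patches are colonized with probability c.
        surv = binom.pmf(np.arange(m + 1), m, 1 - e)
        col = binom.pmf(np.arange(N - m + 1), N - m, c)
        p_m_next = np.convolve(surv, col)          # distribution of the true count M_{t+1}, 0..N
        # Observation layer: detections at t+1 are binomial(M_{t+1}, d).
        for m_next, w2 in enumerate(p_m_next):
            p_next += w * w2 * binom.pmf(np.arange(N + 1), m_next, d)
    return p_next

Treating the detection probability as known converts the problem to a standard MDP, which the following continuation solves by backward-induction value iteration over the observed count (reusing N and prob_next_obs from the sketch above). The relative costs follow footnote 1; the added search-and-destroy cost, horizon, and discount factor are illustrative assumptions.

ACTIONS = ["nothing", "control", "search_destroy"]

def current_loss(n_obs, action):
    # Footnote 1 scale: treating an infestation costs 1; leaving it untreated costs 1.5
    # in forgone ecosystem value. The extra survey effort for search-and-destroy (2.0)
    # is an assumption.
    if action == "nothing":
        return 1.5 * n_obs
    if action == "control":
        return 1.0 * n_obs
    return 1.0 * n_obs + 2.0

def value_iteration(horizon=20, discount=0.95):
    V = np.zeros(N + 1)                            # terminal values
    policy = np.zeros((horizon, N + 1), dtype=int)
    for t in reversed(range(horizon)):
        V_new = np.empty(N + 1)
        for n in range(N + 1):
            q = [current_loss(n, a) + discount * prob_next_obs(n, a) @ V for a in ACTIONS]
            policy[t, n] = int(np.argmin(q))
            V_new[n] = min(q)
        V = V_new
    return policy, V

policy, V = value_iteration()
print([ACTIONS[i] for i in policy[0]])             # optimal first-period action for each observed count

The result is a state-dependent strategy of exactly the kind described above: a prescribed action for every possible observed number of infested patches at each decision point.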

We can envision a number of generalizations to the example provided above, including:



RELEVANCE AND BENEFITS: This study supports the Ecosystems Mission Area (Wildlife: Terrestrial and Endangered Resources; adaptive management). With established exotic species now numbering well into the hundreds in South Florida, the potential impact of invasives has emerged as a high-priority issue in planning the restoration and conservation of the Greater Everglades (South Florida Environmental Report, 2011, South Florida Water Management District).

COMMUNICATION PLAN, TECHNOLOGY and INFORMATION TRANSFER: This study principally involves technology transfer in the form of MATLAB computer code to solve for optimal strategies in the face of incomplete survey information. Training in the use of this software will be provided to appropriate agency staff.

PERSONNEL: Dr. Fred A. Johnson


QUALIFICATIONS of STUDY PERSONNEL: Fred A. Johnson, Ph.D., Research Wildlife Biologist. His principal interest is in the application of decision science to problems in natural resource management. He is particularly active in migratory bird management, with experience in problems of recreational and subsistence harvest, pest control, and habitat management. Dr. Johnson has 30 years of experience integrating research and management to improve wildlife conservation, having worked for the Florida Game & Fresh Water Fish Commission (1981-1989), the U.S. Fish & Wildlife Service (1989-2007), and the U.S. Geological Survey (2007-present).




1 Direct and opportunity costs need not be placed on a currency scale as long as the relative costs can be specified. For example, suppose that the direct cost of treating an infestation is set to unity; then the opportunity cost of not treating an infestation might be assumed to be something like 1.5, suggesting that the loss of ecosystem value of an infested patch is 50% higher than the direct cost of treating the infestation.