our contributions with respect to a robotic search application.
Most closely related to the first contribution of this work
are the results presented in [11] and [7], which show that
under a general matroid constraint the greedy algorithm
yields a constant approximation factor of $(1-\beta)/(1+(1-\beta))$
for normalized, monotone, but not necessarily submodular
objective functions, where $\beta$ is equivalent to the inverse
generalized curvature given in Definition 5 of our work. Our work
extends the bounds of [11], [7] by further considering the
generalized curvature given in Definition 4. Our contribution,
given as Theorem 1, has a form very similar to that of the
bound presented in [12]. In contrast to [12], where the authors
consider the sum of submodular and supermodular functions
and utilize notions of curvature with respect to these
functions, we consider a general non-submodular objective
function. This is significant because a decomposition of a
normalized, monotone set function into a sum of submodular
and supermodular functions is not guaranteed to exist, as
shown in Lemma 3.2 of [12]. Additionally, we consider the
case in which the greedy selection step is potentially suboptimal,
as in [10], which further distinguishes our contribution.
Most closely related to the second contribution of this
work are the results in [8] and [9], where a generalized
greedy algorithm is presented that does not require complete
information regarding preceding decisions. Notably, the generalized
greedy algorithm in [9] enjoys a worst-case approximation
guarantee of $1/(1+k^*(\mathcal{G}))$ under a partition matroid
constraint, assuming a normalized, monotone, submodular
objective function. The term $k^*(\mathcal{G})$ represents the fractional
clique cover number of the underlying communication graph
$\mathcal{G}$ and is discussed in detail in [9]. We extend the work of
[9] by considering the curvature of the objective function,
which is assumed to be normalized and monotone, but not
necessarily submodular.
Paper Overview: This paper is organized as follows.
Mathematical preliminaries are presented and discussed in
Section II. Properties of the objective function, including
notions of curvature, are presented in Section III. The primary
contributions of this work are presented in Section IV. The
benefit-of-search objective function is presented in Section V,
and its properties are analyzed in Section VI. Conclusions
are presented in Section VII. Proofs are presented in the
appendices.
II. MATHEMATICAL PRELIMINARIES
Let $\mathcal{V}$ be a finite ground set and let $f: 2^{\mathcal{V}} \mapsto \mathbb{R}$ be a set
function, where $2^{\mathcal{V}}$ denotes the power set of $\mathcal{V}$. Generally,
calligraphic font is used to denote sets. Exceptions should be
clear from context. We consider a team of $N$ agents given
as $\mathcal{A} = \{1, \ldots, N\}$. In executing the greedy algorithm, each
agent $i$ will select an element $x_i$ from the ground set $\mathcal{V}$ in
an effort to maximize the objective function $f$.
When analyzing the suboptimality of the greedy algorithm,
one often encounters matroids, which are used to represent
constraints on the solutions the algorithm produces. A matroid
is defined as follows.
Definition 1 (Matroid [13, Definition 39.1]): Consider a
finite ground set $\mathcal{V}$ and a non-empty collection of subsets of
$\mathcal{V}$, denoted by $\mathcal{I}$. Then, the pair $(\mathcal{V}, \mathcal{I})$ is called a matroid if
and only if the following conditions hold:
i) for any set $\mathcal{X} \subseteq \mathcal{V}$ such that $\mathcal{X} \in \mathcal{I}$, and for any set
$\mathcal{Z} \subseteq \mathcal{X}$, it holds that $\mathcal{Z} \in \mathcal{I}$;
ii) for any sets $\mathcal{X}, \mathcal{Z} \subseteq \mathcal{V}$ such that $\mathcal{X}, \mathcal{Z} \in \mathcal{I}$ and $|\mathcal{X}| < |\mathcal{Z}|$,
there exists an element $z \in \mathcal{Z} \setminus \mathcal{X}$
such that $\mathcal{X} \cup \{z\} \in \mathcal{I}$.
Matroids are used to represent abstract dependence. The set
$\mathcal{I}$ contains all independent sets of $\mathcal{V}$. A set $x \in \mathcal{I}$ is called
maximal if there exists no $v \in \mathcal{V}$ such that $x \cup \{v\} \in \mathcal{I}$.
For example, any subset of the columns of a matrix is
either linearly independent or dependent. A matroid may
be constructed by allowing the columns of the matrix to
form the ground set $\mathcal{V}$ and every combination of linearly
independent columns to form $\mathcal{I}$. An excellent discussion
of matroids and their history is provided in [13]. Most
importantly for our purposes, matroids provide a construct to
represent constraints in planning. Two notable special types
of matroids are uniform matroids [1], [4], [14] and partition
matroids [9], [10], [15], [16], [17]; the latter is illustrated in the sketch below.
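To make the partition matroid constraint concrete, the following minimal Python sketch (not taken from the referenced works) checks independence of a candidate set under a hypothetical block structure and capacities, mirroring the setting in which each agent may select at most one path from its own block of candidates.

```python
def is_independent_partition(candidate, blocks, capacities):
    """Partition matroid independence test: a set is independent if it
    contains at most capacities[k] elements from the k-th block."""
    return all(
        len(candidate & block) <= cap
        for block, cap in zip(blocks, capacities)
    )

# Hypothetical example: two agents, each allowed at most one path from
# its own block of candidate paths (X_1 and X_2 in the text).
blocks = [{"a1", "a2"}, {"b1", "b2", "b3"}]
capacities = [1, 1]

print(is_independent_partition({"a1"}, blocks, capacities))        # True
print(is_independent_partition({"a1", "b2"}, blocks, capacities))  # True (maximal)
print(is_independent_partition({"b1", "b2"}, blocks, capacities))  # False
```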
To analyze suboptimality bounds for the greedy algorithm,
we utilize the marginal reward.
Definition 2 (Marginal Reward): Given any sets $\mathcal{S}, \mathcal{Q} \subseteq \mathcal{V}$,
the marginal contribution of $\mathcal{S}$ given $\mathcal{Q}$ is
$$\Delta(\mathcal{S} \mid \mathcal{Q}) \triangleq f(\mathcal{S} \cup \mathcal{Q}) - f(\mathcal{Q}). \tag{1}$$
We adopt notation for the marginal reward from [8] and [9]. It
may be useful to consider the marginal reward as the discrete
derivative of $f$, as discussed in [15].
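As an illustration of Definition 2, the sketch below evaluates $\Delta(\mathcal{S} \mid \mathcal{Q})$ for a hypothetical coverage-style objective; the function and element names are assumptions for illustration, not the objective used later in this paper.

```python
def f(s):
    """Illustrative coverage objective: number of distinct cells covered."""
    coverage = {"x1": {1, 2}, "x2": {2, 3}, "x3": {4}}
    return len(set().union(*(coverage[e] for e in s))) if s else 0

def marginal(S, Q):
    """Marginal reward Delta(S | Q) = f(S u Q) - f(Q), as in (1)."""
    return f(set(S) | set(Q)) - f(set(Q))

print(marginal({"x2"}, {"x1"}))          # adds only cell 3 -> 1
print(marginal({"x3"}, {"x1", "x2"}))    # adds only cell 4 -> 1
```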
Note that given an arbitrarily ordered set $\mathcal{S} \subseteq \mathcal{V}$, the
reward attained by $\mathcal{S}$ may be represented as the sum of the
marginal rewards for the elements of $\mathcal{S}$. For example, given
$\mathcal{S} = \{s_1, \ldots, s_M\}$, we have
$$f(\mathcal{S}) = \sum_{i=1}^{M} \Delta\big(\{s_i\} \mid \cup_{j=1}^{i-1} \{s_j\}\big) \tag{2}$$
where $\cup_{j=1}^{i-1}\{s_j\}$ is the empty set when $i = 1$. For convenience,
we use the subscript notation $s_{1:i-1} = \cup_{j=1}^{i-1}\{s_j\}$
to denote a subset of an ordered set. We also often use a
comma in place of the union operator, e.g., $\Delta(\mathcal{S} \mid \mathcal{Q}, \mathcal{Z}) =
\Delta(\mathcal{S} \mid \mathcal{Q} \cup \mathcal{Z})$. Lastly, when representing a single element in a
set, we often drop the curly brace notation, e.g., $\Delta(\{s_i\} \mid \mathcal{Q}) =
\Delta(s_i \mid \mathcal{Q})$.
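For illustration only, the telescoping decomposition (2) can be checked numerically; the sketch below reuses the hypothetical coverage objective from above and verifies that the summed marginal rewards equal $f(\mathcal{S})$ for an arbitrary ordering.

```python
def f(s):
    """Illustrative coverage objective (same hypothetical example as above)."""
    coverage = {"x1": {1, 2}, "x2": {2, 3}, "x3": {4}}
    return len(set().union(*(coverage[e] for e in s))) if s else 0

def marginal(S, Q):
    """Marginal reward Delta(S | Q) as in (1)."""
    return f(set(S) | set(Q)) - f(set(Q))

order = ["x1", "x2", "x3"]   # an arbitrary ordering s_1, ..., s_M
total = sum(marginal({s}, set(order[:i])) for i, s in enumerate(order))
assert total == f(set(order))   # telescoping sum in (2)
print(total, f(set(order)))     # 4 4
```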
By (2), given sets $\mathcal{S}, \mathcal{Q} \subseteq \mathcal{V}$ with $|\mathcal{S}| = M_S$ and $|\mathcal{Q}| = M_Q$
and arbitrary orderings $\mathcal{S} = \{s_1, \ldots, s_{M_S}\}$ and $\mathcal{Q} =
\{q_1, \ldots, q_{M_Q}\}$, we have
$$f(\mathcal{S}, \mathcal{Q}) = f(\mathcal{S}) + \sum_{i=1}^{M_Q} \Delta(q_i \mid q_{1:i-1}, \mathcal{S}) \tag{3}$$
$$f(\mathcal{S}, \mathcal{Q}) = f(\mathcal{Q}) + \sum_{i=1}^{M_S} \Delta(s_i \mid s_{1:i-1}, \mathcal{Q}). \tag{4}$$
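Similarly, (3) and (4) can be verified numerically; the sketch below checks (4) under the same illustrative coverage objective for hypothetical ordered sets $\mathcal{S}$ and $\mathcal{Q}$.

```python
def f(s):
    """Illustrative coverage objective (hypothetical example)."""
    coverage = {"x1": {1, 2}, "x2": {2, 3}, "x3": {4}, "x4": {1, 5}}
    return len(set().union(*(coverage[e] for e in s))) if s else 0

def marginal(S, Q):
    """Marginal reward Delta(S | Q) as in (1)."""
    return f(set(S) | set(Q)) - f(set(Q))

S = ["x1", "x2"]   # ordered set S = {s_1, ..., s_{M_S}}
Q = ["x3", "x4"]   # ordered set Q = {q_1, ..., q_{M_Q}}
lhs = f(set(S) | set(Q))
rhs = f(set(Q)) + sum(marginal({s}, set(S[:i]) | set(Q)) for i, s in enumerate(S))
assert lhs == rhs   # decomposition (4)
print(lhs, rhs)     # 5 5
```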
For the case of informative path planning, let $\mathcal{X}_i$ represent
the set of paths available to agent $i$ and let $\mathcal{V} = \cup_{i \in \mathcal{A}} \mathcal{X}_i$. We