
19.3: Multi-tasking


    Learning Objectives
    • How does an employer compensate an agent who performs several tasks of varying importance? What happens when the tasks conflict?

    Multi-tasking refers to performing several activities simultaneously. All of us multitask. We study while drinking a caffeinated beverage; we think about things in the shower; we talk all too much on cell phones and eat french fries while driving. In the context of employees, an individual employee is assigned a variety of tasks and responsibilities, and the employee must divide her time and efforts among the tasks. Incentives provided to the employee must direct not only the total efforts of the employee, but also the allocation of time and effort across activities. An important aspect of multitasking is the interaction of incentives provided to an employee, and the effects of changes in one incentive on the behavior of the employee over many different dimensions. In this section, we will establish conditions under which the problem of an employer disaggregates; that is, the incentives for performing each individual task can be set independently of the incentives applied to the others.

    This section is relatively challenging and involves a number of pieces. To simplify the presentation, some of the analyses are set aside as claims.

    To begin the analysis, we consider a person who has n tasks or jobs. For convenience, we will index these activities with the natural numbers 1, 2, …, n. The level of activity, which may also be thought of as an action, in task i will be denoted by \(x_i\). It will prove convenient to denote the vector of actions by \(x=(x_{1}, \ldots, x_{n})\). We suppose the agent bears a cost \(c(x)\) of undertaking the vector of actions x. We make four assumptions on c:

    1. c is increasing in each \(x_i\).
    2. c has a continuous second derivative.
    3. c is strictly convex.
    4. c is homogeneous of degree r. (Homogeneous functions were defined in Chapter 10.)

    For example, if there are two tasks (n = 2), then all four of these assumptions are met by the cost function \(c(x_{1}, x_{2})=x_{1}^{2}+x_{2}^{2}+\frac{1}{2} x_{1} x_{2}\). This function is increasing in \(x_{1}\) and \(x_{2}\), has continuous derivatives, is strictly convex (more about this below), and is homogeneous of degree 2.
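    For readers who like to check such claims numerically, the short sketch below is our own illustration rather than part of the original text. It assumes NumPy is available; the function name c and the test point are arbitrary choices. It verifies homogeneity of degree 2, positive definiteness of the Hessian (which gives strict convexity for a quadratic cost), and positive partial derivatives at a positive action vector.

```python
# Illustrative check (ours) of the four assumptions for the example cost
# c(x1, x2) = x1^2 + x2^2 + (1/2) x1 x2.
import numpy as np

def c(x):
    x1, x2 = x
    return x1**2 + x2**2 + 0.5 * x1 * x2

x = np.array([1.3, 0.7])    # an arbitrary positive action vector
lam = 3.0

# Homogeneity of degree r = 2: c(lambda * x) = lambda**2 * c(x)
print(np.isclose(c(lam * x), lam**2 * c(x)))        # True

# Strict convexity: the Hessian [[2, 0.5], [0.5, 2]] is positive definite
H = np.array([[2.0, 0.5], [0.5, 2.0]])
print(bool(np.all(np.linalg.eigvalsh(H) > 0)))      # True

# Increasing in each task (for positive x): both partial derivatives > 0
grad = H @ x                                        # gradient of the quadratic cost
print(bool(np.all(grad > 0)))                       # True
```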

    It is assumed that c is increasing to identify the activities as costly. Continuity of derivatives is used for convenience. Convexity of c will ensure that a solution to the first-order conditions is actually an optimum for the employee. Formally, a function c is convex if, for any vectors x and y and any scalar α satisfying \(0 \leq \alpha \leq 1\),

    \[\alpha c(x)+(1-\alpha) c(y) \geq c(\alpha x+(1-\alpha) y). \nonumber \]

    In other words, when x is a scalar, a convex function is one that lies below the straight line segment connecting any two points on its graph.

    One way of interpreting this requirement is that the cost of doing the average of two things is no more than the average of the costs of those two things. Intuitively, convexity requires that doing a medium thing is less costly than the average of two extremes. This is plausible when extremes tend to be very costly. It also means the set of vectors that cost less than a fixed amount, {x | c(x) ≤ b}, is a convex set. Thus, if two points cost less than a given budget, the line segment connecting them does, too. Convexity of the cost function ensures that the agent’s objective is concave and thus that the first-order conditions describe a maximum. When the inequality is strict for α satisfying 0 < α < 1, we refer to convexity as strict convexity.
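    The chord inequality itself is easy to check numerically for the example cost function. The sketch below is our own illustration, not from the text; the two action vectors are arbitrary, and NumPy is assumed.

```python
# Our numerical illustration of the convexity inequality for the example
# cost c(x1, x2) = x1^2 + x2^2 + (1/2) x1 x2: the cost of a weighted
# average of two action vectors is below the weighted average of their costs.
import numpy as np

def c(x):
    return x[0]**2 + x[1]**2 + 0.5 * x[0] * x[1]

x = np.array([2.0, 1.0])   # arbitrary action vectors
y = np.array([0.5, 3.0])

for alpha in (0.25, 0.5, 0.75):
    lhs = c(alpha * x + (1 - alpha) * y)
    rhs = alpha * c(x) + (1 - alpha) * c(y)
    print(alpha, lhs < rhs)   # strict inequality, since c is strictly convex and x != y
```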

    The assumption of homogeneity dictates that scale works in a particularly simple manner: scaling all of the activities up by a common factor λ scales costs by \(\lambda^{r}\). Homogeneity has very strong implications that are probably unreasonable in many settings. Nevertheless, homogeneity leads to an elegant and useful theory, as we shall see. Recall the definition of a homogeneous function: c is homogeneous of degree r means that for any \(\lambda>0\), \(c(\lambda x)=\lambda^{r} c(x)\).

    Claim: Strict convexity implies that \(r>1\).

    Proof of Claim: Fix any x with \(c(x)>0\) and any \(\lambda>0\) with \(\lambda \neq 1\), and consider the two points x and \(\lambda x\). By strict convexity, for

    \[0<\alpha<1, \quad\left(\alpha+(1-\alpha) \lambda^{r}\right) c(x)=\alpha c(x)+(1-\alpha) c(\lambda x)>c(\alpha x+(1-\alpha) \lambda x)=(\alpha+(1-\alpha) \lambda)^{r} c(x), \nonumber \]

    which implies \(\alpha+(1-\alpha) \lambda^{r}>(\alpha+(1-\alpha) \lambda)^{r}\).

    Define a function k that is the left-hand side minus the right-hand side:

    \(k(\alpha)=\alpha+(1-\alpha) \lambda^{r}-(\alpha+(1-\alpha) \lambda)^{r}\). Note that \(k(0)=k(1)=0\). Moreover, \(k^{\prime \prime}(\alpha)=-r(r-1)(\alpha+(1-\alpha) \lambda)^{r-2}(1-\lambda)^{2}\). It is readily checked that a twice-differentiable function of one variable is convex if and only if its second derivative is nonnegative. If

    \(r \leq 1\), then \(k^{\prime \prime}(\alpha) \geq 0\), implying that k is convex, and hence, if \(0<\alpha<1\), \(k(\alpha)=k((1-\alpha) \cdot 0+\alpha \cdot 1) \leq(1-\alpha) k(0)+\alpha k(1)=0\).

    Similarly, if r > 1, k is concave and k(α) > 0. This completes the proof, showing that r ≤ 1 is not compatible with the strict convexity of c.
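    A quick numerical look at the function k makes the role of r visible. The sketch below is our own illustration (NumPy assumed; λ = 2 is an arbitrary choice). For r ≤ 1, k is nonpositive on (0, 1), contradicting the strict inequality \(k(\alpha)>0\) derived above, while for r > 1 it is positive.

```python
# Our illustration of the sign of
# k(alpha) = alpha + (1 - alpha) * lam**r - (alpha + (1 - alpha) * lam)**r
# for a fixed lambda and several degrees of homogeneity r.
import numpy as np

def k(alpha, lam, r):
    return alpha + (1 - alpha) * lam**r - (alpha + (1 - alpha) * lam)**r

alphas = np.linspace(0.01, 0.99, 99)
lam = 2.0

for r in (0.5, 1.0, 2.0):
    vals = k(alphas, lam, r)
    print(r, float(vals.min()), float(vals.max()))

# r = 0.5: k(alpha) < 0 on (0, 1), so strict convexity of c is impossible
# r = 1.0: k(alpha) = 0 everywhere (up to rounding)
# r = 2.0: k(alpha) > 0 on (0, 1), consistent with strict convexity
```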

    How should our person behave? Consider linear incentives, which are also known as piece rates. With piece rates, the employee gets a payment \(p_i\) for each unit of \(x_i\) produced. The person then chooses x to maximize \(u=\sum_{i=1}^{n} p_{i} x_{i}-c(x)=p \cdot x-c(x)\).

    Here • is the dot product, which is the sum of the products of the components.

    The agent chooses x to maximize u, resulting in the n first-order conditions \(\frac{\partial u}{\partial x_{i}}=p_{i}-\frac{\partial c(x)}{\partial x_{i}}=p_{i}-c_{i}(x)=0\), where \(c_i\) is the partial derivative of c with respect to the ith argument \(x_i\). This first-order condition can be expressed more compactly as \(0=p-c^{\prime}(x)\), where \(c^{\prime}(x)\) is the vector of partial derivatives of c. Convexity of c ensures that any solution to this problem is a global utility maximum because the function u is concave, and strict convexity ensures that there is at most one solution to the first-order conditions. (This description is slightly inadequate because we haven’t considered boundary conditions. Often a requirement like \(x_i \geq 0\) is also needed. In this case, the first-order conditions may not hold with equality for those choices where \(x_i = 0\) is optimal.)
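    For the two-task example, the first-order condition \(p=c^{\prime}(x)\) is a linear system, because the gradient of the quadratic example cost is \(c^{\prime}(x)=\left(2 x_{1}+\frac{1}{2} x_{2},\; 2 x_{2}+\frac{1}{2} x_{1}\right)\). The sketch below is our own illustration, assuming NumPy; the piece rates are made-up numbers.

```python
# Our sketch of the agent's first-order condition p = c'(x) for the example
# cost, where c'(x) = H x with H = [[2, 0.5], [0.5, 2]].
import numpy as np

H = np.array([[2.0, 0.5], [0.5, 2.0]])   # Hessian of the quadratic example cost
p = np.array([3.0, 1.0])                 # hypothetical piece rates p1, p2

x_star = np.linalg.solve(H, p)           # solves p - c'(x) = 0, i.e. H x = p
print(x_star)                            # the agent's chosen action vector

# Because u = p.x - c(x) is concave, this stationary point is the agent's
# global utility maximum.
u = p @ x_star - (x_star[0]**2 + x_star[1]**2 + 0.5 * x_star[0] * x_star[1])
print(u)
```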

    One very useful implication of homogeneity is that incentives scale. Homogeneity has the effect of turning a very complicated optimization problem into a problem that is readily solved, thanks to this very scaling.

    Claim: If all incentives rise by a scalar factor α, then x rises by \(\alpha^{\frac{1}{r-1}}\).

    Proof of Claim: Note that differentiating \(c(\lambda x)=\lambda^{r} c(x)\) with respect to \(x_i\) yields \(\lambda c_{i}(\lambda x)=\lambda^{r} c_{i}(x)\), and thus \(c^{\prime}(\lambda x)=\lambda^{r-1} c^{\prime}(x)\). That is, if c is homogeneous of degree r, then \(c^{\prime}\) is homogeneous of degree r – 1. Consequently, if \(0=p-c^{\prime}(x)\), then \(0=\alpha p-c^{\prime}\left(\alpha^{\frac{1}{r-1}} x\right)\). Thus, if the incentives are scaled up by α, the efforts rise by the scalar factor \(\alpha^{\frac{1}{r-1}}\).
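    The scaling property is easy to see in the example, where r = 2, so efforts should scale one-for-one with the incentives. The following check is our own illustration (NumPy assumed; the piece rates are arbitrary).

```python
# Our check of the scaling claim for the example cost, which is homogeneous
# of degree r = 2: doubling the piece rates should double the effort vector.
import numpy as np

H = np.array([[2.0, 0.5], [0.5, 2.0]])   # c'(x) = H x for the example cost
p = np.array([3.0, 1.0])                 # hypothetical piece rates
alpha, r = 2.0, 2                        # scale factor and degree of homogeneity

x_base   = np.linalg.solve(H, p)             # efforts at incentives p
x_scaled = np.linalg.solve(H, alpha * p)     # efforts at incentives alpha * p

# Efforts scale by alpha**(1/(r-1)); here r = 2, so doubling p doubles x.
print(np.allclose(x_scaled, alpha**(1 / (r - 1)) * x_base))   # True
```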

    Now consider an employer with an agent engaging in n activities. The employer values the ith activity at \(v_i\) and thus wishes to maximize \(\pi=\sum_{i=1}^{n}\left(v_{i}-p_{i}\right) x_{i}=\sum_{i=1}^{n}\left(v_{i}-c_{i}(x)\right) x_{i}\), where the second equality substitutes the agent’s first-order condition \(p_{i}=c_{i}(x)\).

    This equation embodies a standard trick in agency theory. Think of the principal (employer) not as choosing the incentives p, but instead as choosing the effort levels x, with the incentives as a constraint. That is, the principal can be thought of as choosing x and then choosing the p that implements this x. The principal’s expected profit is readily differentiated with respect to each \(x_j\), yielding \(0=v_{j}-c_{j}(x)-\sum_{i=1}^{n} c_{i j}(x) x_{i}\).

    However, because \(c_{j}(x)\) is homogeneous of degree r – 1,

    \[\sum_{i=1}^{n} c_{i j}(x) x_{i}=\left.\frac{d}{d \lambda} c_{j}(\lambda x)\right|_{\lambda=1}=\left.\frac{d}{d \lambda} \lambda^{r-1} c_{j}(x)\right|_{\lambda=1}=(r-1) c_{j}(x), \nonumber \]

    and thus

    \(0=v_{j}-c_{j}(x)-\sum_{i=1}^{n} c_{i j}(x) x_{i}=v_{j}-r c_{j}(x).\)

    This expression proves the main result of this section. Under the maintained hypotheses (convexity and homogeneity), an employer of a multitasking agent uses incentives that are a constant proportion of value; that is, \(p_{j}=\frac{v_{j}}{r}\), where r is the degree of homogeneity of the agent’s costs. Recalling that r > 1, the principal uses a sharing rule, sharing a fixed proportion of value with the agent.
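    The sharing rule can be verified numerically for the two-task example, where r = 2 and so \(p_{j}=\frac{v_{j}}{2}\). The sketch below is our own illustration, not part of the text; the values \(v_1, v_2\) are made up, NumPy is assumed, and the agent’s response is computed from the first-order condition above. Randomly perturbed incentives never beat the sharing rule.

```python
# Our check that the sharing rule p = v / r maximizes the principal's profit
# for the example cost (r = 2), given the agent's response x(p) = H^{-1} p.
import numpy as np

H = np.array([[2.0, 0.5], [0.5, 2.0]])   # c'(x) = H x for the example cost
v = np.array([4.0, 2.0])                 # hypothetical values of the two activities
r = 2                                    # degree of homogeneity of the example cost

def profit(p):
    x = np.linalg.solve(H, p)            # the agent's response to piece rates p
    return (v - p) @ x

p_star = v / r                           # the sharing rule derived above
rng = np.random.default_rng(0)
nearby = max(profit(p_star + 0.1 * rng.standard_normal(2)) for _ in range(500))

print(profit(p_star) >= nearby)          # True: no perturbed incentive does better
```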

    When agents have a homogeneous cost function, the principal has a very simple optimal incentive scheme, requiring quite limited knowledge of the agent’s cost function (just the degree of homogeneity). Moreover, the incentive scheme works through a somewhat surprising mechanism. Note that if the value of one activity, for example, Activity 1, rises, \(p_1\) rises and all the other payment rates stay constant. The agent responds by increasing \(x_1\), but the other activities may rise or fall depending on how complementary they are to Activity 1. Overall, the agent’s substitution across activities given the new incentive level on Activity 1 implements the desired effort levels on other activities. The remarkable implication of homogeneity is that, although the principal desires different effort levels for all activities, only the incentive on Activity 1 must change.
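    The substitution mechanism can also be illustrated with the example cost function, in which the cross-partial \(c_{12}=\frac{1}{2}>0\) makes the two activities substitutes in cost. In the sketch below (our own illustration, with made-up values and NumPy assumed), raising \(v_1\) raises only \(p_1\), yet the agent’s effort on Activity 2 falls.

```python
# Our illustration of the substitution effect: only p1 changes when v1 rises,
# but the agent's effort on Activity 2 adjusts through the cost interaction.
import numpy as np

H = np.array([[2.0, 0.5], [0.5, 2.0]])   # c'(x) = H x for the example cost
r = 2                                    # degree of homogeneity

def efforts(v):
    p = v / r                            # sharing rule: p_j = v_j / r
    return np.linalg.solve(H, p)         # the agent's response x to incentives p

print(efforts(np.array([4.0, 2.0])))     # baseline efforts
print(efforts(np.array([6.0, 2.0])))     # v1 raised: p2 is unchanged, yet x2 falls
```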

    Key Takeaways

    • Multi-tasking refers to performing several activities simultaneously.
    • In the agency context, multitasking refers to the incentives of a principal to compensate different tasks.
    • A simple model of multitasking uses a convex cost of a set of tasks that is homogeneous of degree r in the tasks. This means that scaling all activities by a common factor λ scales costs by \(\lambda^{r}\).
    • With piece rates, the employee gets a fixed payment for each unit produced.
    • One very useful implication of homogeneity is that incentives scale. If all incentives rise by a scalar factor α, then x rises by \(\alpha^{\frac{1}{r-1}}\), where r is the degree of homogeneity.
    • Given convexity and homogeneity, an employer of a multitasking agent uses incentives that are a constant proportion of value; that is, \(p_{j}=\frac{v_{j}}{r}\).

    This page titled 19.3: Multi-tasking is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Anonymous via source content that was edited to the style and standards of the LibreTexts platform.