Genetic values - simulating phenotypes/traits

Note

The objects described here are passed to the gvalue parameter when initializing instances of fwdpy11.ModelParams.

The previous section discussed setting up a model where a mutation’s effect size (fwdpy11.Mutation.s) directly affects individual fitness. An alternative model is one where mutations affect some abstract “trait” or “phenotype” and a separate function maps trait values to fitness.

Let’s consider the standard model of evolutionary quantitative genetics:

  • Mutations have additive effects on trait values

  • The fitness of a trait value is a quadratic function of its distance from an “optimum” trait value.

In fwdpy11, a non-mutant individual has a phenotype of 0.0. Trait values are additive over the values contributed by individual genotypes according to the following table:

  Genotype      AA     Aa        aa
  Trait value   0      hs        scaling × s

(If we model multiplicative effects on a trait, a non-mutant individual still has a value of 0.0. The internal machinery handles this so that you don’t have to worry about it.)
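As a plain-Python illustration (not fwdpy11's internal code; the function names here are hypothetical), the per-genotype values in the table above sum across mutated sites to give an individual's trait value:

```python
def site_value(n_copies, s, h, scaling=2.0):
    """Trait value contributed by one site, per the table above.

    n_copies: copies of the mutant allele (0 = AA, 1 = Aa, 2 = aa).
    """
    if n_copies == 0:
        return 0.0
    if n_copies == 1:
        return h * s
    return scaling * s


def additive_trait_value(genotype, scaling=2.0):
    """Sum contributions over sites; a non-mutant individual gets 0.0."""
    return sum(site_value(n, s, h, scaling) for n, s, h in genotype)


# One heterozygous site (s=0.1, h=0.5) plus one homozygous site (s=-0.05):
additive_trait_value([(1, 0.1, 0.5), (2, -0.05, 1.0)])  # 0.5*0.1 + 2.0*(-0.05) = -0.05
```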

To specify an additive effects model of a trait under Gaussian stabilizing selection with an optimum trait value of 0.0 and (inverse) strength of stabilizing selection VS = 1.0, we write:

import fwdpy11

gss = fwdpy11.GaussianStabilizingSelection.single_trait([fwdpy11.Optimum(optimum=0.0, VS=1.0, when=0)])
gvalue = fwdpy11.Additive(
    scaling=2.0, gvalue_to_fitness=gss,
)

Here, we are using a second parameter to initialize a “genetic value to fitness” map stored in an instance of fwdpy11.Additive. (fwdpy11.Multiplicative also supports such maps.) See fwdpy11.GaussianStabilizingSelection for details.
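The Gaussian stabilizing selection map itself is simple. Here is a sketch of the textbook form, w = exp(-(g - optimum)² / (2 VS)), in plain numpy (this mirrors the standard formula; consult fwdpy11.GaussianStabilizingSelection for the exact implementation):

```python
import numpy as np


def gaussian_fitness(g, optimum=0.0, VS=1.0):
    # Fitness decays with the squared distance of the trait value g
    # from the optimum; larger VS means a flatter curve, i.e. weaker
    # stabilizing selection.
    return np.exp(-((g - optimum) ** 2) / (2.0 * VS))


gaussian_fitness(0.0)  # an individual at the optimum has fitness 1.0
gaussian_fitness(1.0)  # farther from the optimum, fitness is below 1.0
```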

We can also add Gaussian noise to an individual’s trait value:

import numpy as np
gss = fwdpy11.GaussianStabilizingSelection.single_trait([fwdpy11.Optimum(optimum=0.0, VS=2.0 / 3.0, when=0)])
gvalue = fwdpy11.Additive(
    scaling=2.0,
    gvalue_to_fitness=gss,
    noise=fwdpy11.GaussianNoise(mean=0.0, sd=np.sqrt(1.0 / 3.0)),
)

The last example requires some explanation:

  • We want VS = 1.0. We can decompose VS = VW + VE, where VW and VE are the additive contributions of genetic and environmental effects.

  • Here, the environmental effect is Gaussian with mean zero and variance 1/3. The class is parameterized by the standard deviation, however, so we need to pass in the square root.

  • We then set VS = 1 - 1/3 = 2/3 when initializing fwdpy11.Optimum.

Yes, this is a nomenclature issue! The VS argument to fwdpy11.Optimum really should be called VW and we’ll fix that in a future version and hopefully not break people’s code.
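A quick numerical sanity check of the decomposition (pure numpy, independent of fwdpy11): drawing environmental noise with sd = sqrt(1/3) gives a sample variance near VE = 1/3, so VW = 1 - VE = 2/3 is the value passed to fwdpy11.Optimum above:

```python
import numpy as np

rng = np.random.default_rng(42)
VE = 1.0 / 3.0                 # environmental variance
VW = 1.0 - VE                  # genetic component; the VS=2/3 passed above
noise = rng.normal(loc=0.0, scale=np.sqrt(VE), size=200_000)
noise.var()                    # sample variance, close to 1/3
VW + VE                        # recovers the intended total VS of 1.0
```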

In general, there’s a good bit of subtlety to properly modeling quantitative traits. The machinery described here was used in [Tho19]. [Burger00] is an excellent technical reference on the topic. [WL18] also thoroughly covers a lot of relevant material.

For an example of another approach to modeling phenotypes often attributed to [EW10], see here.

Todo

Write (and refer to) an advanced section on pleiotropic models.

Changing the optimum phenotype during a simulation

A sudden optimum shift

The previous example set up a model where the optimum is stable for the entire simulation. We can parameterize a shifting optimum using fwdpy11.GaussianStabilizingSelection. For example, to shift the optimum from 0.0 to 1.0 at generation 100:

moving_optimum = fwdpy11.GaussianStabilizingSelection.single_trait(
    [
        fwdpy11.Optimum(when=0, optimum=0.0, VS=1.0),
        fwdpy11.Optimum(when=100, optimum=1.0, VS=1.0),
    ]
)

gvalue = fwdpy11.Additive(scaling=2.0, gvalue_to_fitness=moving_optimum)
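The schedule above is piecewise constant: each Optimum takes effect at its when generation and remains in force until the next entry. A plain-Python sketch of that lookup (the helper name is ours, not part of fwdpy11):

```python
def optimum_at(generation, schedule):
    # schedule: list of (when, optimum) pairs sorted by `when`;
    # the active optimum is the last entry with when <= generation.
    current = schedule[0][1]
    for when, opt in schedule:
        if when <= generation:
            current = opt
    return current


schedule = [(0, 0.0), (100, 1.0)]
optimum_at(50, schedule)   # 0.0 -- before the shift
optimum_at(150, schedule)  # 1.0 -- after the shift at generation 100
```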

Randomly-moving optimum

Since we are working in Python, we can take advantage of existing libraries to implement interesting models. Let’s consider the following model of a randomly moving optimum:

  • There is a 1% chance each generation that the optimum shifts.

  • When a shift happens, a normal deviate with mean 0.0 and variance 0.1 is added to the current optimum.

  • The simulation will end at generation 1,000.

Let’s code it up:

optima = [fwdpy11.Optimum(when=0, optimum=0.0, VS=10.0)]

last_time = 0
last_optimum = 0.0

np.random.seed(666)

while last_time < 1000:
    last_time += int(np.random.geometric(0.01, 1)[0])
    last_optimum += np.random.normal(loc=0.0, scale=np.sqrt(0.1), size=1)[0]
    if last_time < 1000:
        optima.append(fwdpy11.Optimum(when=last_time, optimum=last_optimum, VS=10.0))

random_moving_optimum = fwdpy11.GaussianStabilizingSelection.single_trait(optima)
random_moving_optimum
fwdpy11.GaussianStabilizingSelection(is_single_trait=True, optima=[fwdpy11.Optimum(optimum=0.0, VS=10.0, when=0), fwdpy11.Optimum(optimum=np.float64(0.14621925773046504), VS=10.0, when=120), fwdpy11.Optimum(optimum=np.float64(0.43133329773747653), VS=10.0, when=250), fwdpy11.Optimum(optimum=np.float64(0.42015093754764493), VS=10.0, when=552), fwdpy11.Optimum(optimum=np.float64(0.3570913002785381), VS=10.0, when=557), fwdpy11.Optimum(optimum=np.float64(-0.0949791266685498), VS=10.0, when=568), fwdpy11.Optimum(optimum=np.float64(-0.08281396117205914), VS=10.0, when=704), fwdpy11.Optimum(optimum=np.float64(-0.4194925319000761), VS=10.0, when=726), fwdpy11.Optimum(optimum=np.float64(-0.09246489411724829), VS=10.0, when=875), fwdpy11.Optimum(optimum=np.float64(-0.37135303951111776), VS=10.0, when=876), fwdpy11.Optimum(optimum=np.float64(0.024965855928271607), VS=10.0, when=898)])

Note

Note the cast to int when updating the time. fwdpy11.Optimum is very picky about its input. It requires int for when and will raise an exception if the numpy.int64 from numpy.random.geometric() gets passed in.
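For example (pure numpy; the int cast is the relevant part):

```python
import numpy as np

draw = np.random.geometric(0.01, 1)[0]
type(draw)        # numpy.int64, not a Python int
when = int(draw)  # cast before passing to fwdpy11.Optimum(when=...)
type(when)        # int
```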
