In this simulation i will show that expected value of the ensemble wealth increases with time but wealth over time is doomed.


We will start with 100 people, each having 100 \$ at the start of simulation. Each person, at every step, can win 0.5 $\times$ wealth, or lose 0.4 $\times$ wealth.

In [1]:
import numpy as np, pandas as pd, matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')

The process starts in the way showed below.
The Average of the ensemble wealth at the beginning of the process is 100.

In [2]:
starting_process = pd.DataFrame({i:100 for i in np.arange(100)}, index = ['step 0'])
starting_process['Sample Average'] = starting_process.mean(axis=1)
starting_process
Out[2]:
0 1 2 3 4 5 6 7 8 9 ... 91 92 93 94 95 96 97 98 99 Sample Average
step 0 100 100 100 100 100 100 100 100 100 100 ... 100 100 100 100 100 100 100 100 100 100.0

1 rows × 101 columns

The ensemble_step function updates the wealth of the ensamble of a single step</b>

In [3]:
def ensemble_step(single_wealths, N):
    f = lambda x: x * np.random.choice([0.6,1.5])
    new_step = single_wealths.iloc[-1].to_frame().T.apply(f)
    new_step.index = ['step %s'%N]
    new_step['Sample Average'] = new_step.apply(np.mean, axis=1)
    return pd.concat([single_wealths, new_step])

The ensemble_simulation simulates a single stocastic process of the ensemble wealth for N steps.

We start with a simulation of 8 steps. Below you can see how it looks. The last column is the sample average of each step.

In [4]:
def ensamble_simulation(starting, steps): # ensamble simulation where 'steps' is the number of steps we choose
    stocastic_process = starting
    for i in range(1,steps):
        stocastic_process = ensemble_step(stocastic_process,str(i))
    return stocastic_process
    
steps = 8
complete_process = ensamble_simulation(starting_process, steps)
complete_process #simulation of 8 steps
Out[4]:
0 1 2 3 4 5 6 7 8 9 ... 91 92 93 94 95 96 97 98 99 Sample Average
step 0 100.00 100.00 100.00 100.000 100.000 100.000 100.00 100.00 100.00000 100.0000 ... 100.00 100.000 100.0000 100.00 100.000 100.000 100.00 100.00 100.000 100.000000
step 1 150.00 150.00 150.00 150.000 150.000 60.000 150.00 150.00 60.00000 60.0000 ... 60.00 150.000 150.0000 150.00 60.000 60.000 150.00 150.00 60.000 108.118812
step 2 90.00 90.00 225.00 90.000 225.000 36.000 90.00 225.00 36.00000 36.0000 ... 90.00 225.000 225.0000 225.00 36.000 36.000 90.00 90.00 36.000 115.949221
step 3 54.00 135.00 337.50 54.000 337.500 21.600 54.00 337.50 21.60000 21.6000 ... 135.00 337.500 135.0000 337.50 21.600 21.600 135.00 135.00 21.600 122.883857
step 4 81.00 202.50 506.25 32.400 506.250 32.400 81.00 506.25 12.96000 12.9600 ... 202.50 506.250 202.5000 202.50 32.400 12.960 81.00 81.00 32.400 139.549067
step 5 121.50 121.50 303.75 48.600 759.375 19.440 48.60 303.75 7.77600 7.7760 ... 121.50 759.375 303.7500 303.75 19.440 19.440 48.60 48.60 48.600 148.137164
step 6 72.90 182.25 182.25 29.160 455.625 11.664 72.90 182.25 4.66560 11.6640 ... 182.25 455.625 455.6250 182.25 11.664 11.664 72.90 72.90 29.160 138.123911
step 7 109.35 109.35 109.35 17.496 273.375 17.496 43.74 109.35 2.79936 6.9984 ... 109.35 273.375 683.4375 109.35 17.496 17.496 109.35 43.74 17.496 132.634709

8 rows × 101 columns

Below we plot the values of the Sample Average for each step

In [5]:
averages = complete_process['Sample Average']
y = averages.to_frame().set_index(np.arange(steps))
ax = y.plot(grid=True,figsize=(10,10), linestyle = '--', marker = 'o');
ax.set_xlabel('Steps for the whole ensamble', fontsize=12);
ax.axis([-0.2,7.4, min(y.values)-min(y.values)/50, max(y.values)+ max(y.values)/50]);

Below we calculate the average of the sample averages.

In [6]:
averages.mean()
Out[6]:
125.6745926717682

Now we do the same thing, but we do 8 simulations, not only one.
So we have 8 stocastic wealth processes, each consisting of 8 steps

In [7]:
from itertools import product

list_averages = []
fig, axes = plt.subplots(2,4, figsize=(18,10))
t1 = (0,1)
t2 = (0,1,2,3)
generate_x_y = product(t1,t2)
for i in range(8):
    complete_process = ensamble_simulation(starting_process, steps)
    averages = complete_process['Sample Average']
    list_averages.append(averages.mean())
    g,k = next(generate_x_y)
    y = averages.to_frame().set_index(np.arange(steps))
    y.plot(grid=True, ax = axes[g,k], linestyle = '--', marker = 'o');
    axes[g,k].set_xlabel('Steps for the whole ensamble', fontsize=12);
    axes[g,k].axis([-0.2,7.4, min(y.values)-min(y.values)/50, max(y.values)+ max(y.values)/50 ])
   

Below you can see the averages of the sample averages for each stocastic wealth process.

In [8]:
averages_value = pd.Series(list_averages) # sample averages of 8 simulation with 8 time steps
averages_value
Out[8]:
0     99.277068
1    113.333344
2    135.234650
3    118.643156
4    107.372753
5    121.149934
6    130.017076
7    112.545082
dtype: float64

To understand how this column moves with time, we increase the number of steps of each simulation
Below we have 8 stocastic processes, and each process consists of 50 steps.

In [9]:
steps = 50
complete_process = ensamble_simulation(starting_process, steps)

list_averages = []
fig, axes = plt.subplots(2,4, figsize=(18,10))
t1 = (0,1)
t2 = (0,1,2,3)
generate_x_y = product(t1,t2)
for i in range(8):
    complete_process = ensamble_simulation(starting_process, steps)
    averages = complete_process['Sample Average']
    list_averages.append(averages.mean())
    g,k = next(generate_x_y)
    y = averages.to_frame().set_index(np.arange(steps))
    y.plot(grid=True, ax = axes[g,k], linestyle = '--');
    axes[g,k].set_xlabel('Steps for the whole ensamble', fontsize=12);
    axes[g,k].axis([-2,53, min(y.values)-min(y.values)/50, max(y.values)+ max(y.values)/50 ])

Let's see the column of the averages value. The expected value of the ensemble wealth has increased.

In [10]:
averages_value = pd.Series(list_averages) # sample averages of 8 simulations with 50 time steps
averages_value
Out[10]:
0    641.684589
1    378.852719
2    360.285598
3    230.145481
4    209.629599
5    415.774149
6    193.489833
7    201.508858
dtype: float64

Ok, now we carry on the steps of each simulation from 50 steps to 350 steps.
Below you can see the paths of 8 stocastic processes, each consisting of 350 steps.
What do you notice?

In [11]:
steps = 350
complete_process = ensamble_simulation(starting_process, steps)

list_averages = []
fig, axes = plt.subplots(2,4, figsize=(18,10))
t1 = (0,1)
t2 = (0,1,2,3)
generate_x_y = product(t1,t2)
for i in range(8):
    complete_process = ensamble_simulation(starting_process, steps)
    averages = complete_process['Sample Average']
    list_averages.append(averages.mean())
    g,k = next(generate_x_y)
    y = averages.to_frame().set_index(np.arange(steps))
    y.plot(grid=True, ax = axes[g,k], linestyle = '--');
    axes[g,k].set_xlabel('Steps for the whole ensamble', fontsize=12);
    axes[g,k].axis([-2,360, min(y.values)-min(y.values)/50, max(y.values)+ max(y.values)/50])

In the graphs showed above, i notice that all the stocastic process are doomed in the long term
So, as only every single wealth person is doomed in time, in the same way the ensemble wealth is doomed in time under the multiplicative dynamics of the hypothesis.
But what about the expectated value of the wealth?
Below you can see the averages for each stocastic process...

In [12]:
averages_value = pd.Series(list_averages) # sample averages of 8 simulations with 350 time steps
averages_value
Out[12]:
0      758.618391
1     2031.511236
2    36964.693195
3      199.577018
4    32382.300980
5    96017.970895
6     1360.557004
7     2712.400275
dtype: float64
In [13]:
averages_value.mean()
Out[13]:
21553.453624254103

We start with an ensemble wealth average of 100.
After 350 steps, the ensemble wealth average is 21553.
The wealth is doomed over time (steps), but the ensemble average increases with time.
The ensemble average isn't a good benchmark for the variable over time under multiplicative dynamic.
Here the discussion on Twitter


In [ ]: