
Cobb-Douglas Function Slows Running Tremendously. How To Expedite A Non-linear Calculation In Python?

I have a working microeconomic model with 10 modules and 2,000 agents, running for up to 10 years. The program was running fast, providing results, output, and graphics in a matter of seconds.

Solution 1:

It's hard to say how to fix it without seeing the context in the rest of your code; but one thing that might speed it up is pre-computing dummy quantities with numpy. For example, you could make a numpy array of each agent's total_balance and sum_qualification, compute a corresponding array of dummy_quantities and then assign that back to the agents.
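A sketch of that idea, assuming agent objects with `total_balance` and `sum_qualification` attributes as named in the question; the Cobb-Douglas form and the exponent `alpha` here are illustrative assumptions, not the questioner's actual formula:

```python
import numpy as np

# Hypothetical stand-in for the model's agent class.
class Agent:
    def __init__(self, total_balance, sum_qualification):
        self.total_balance = total_balance
        self.sum_qualification = sum_qualification
        self.dummy_quantity = 0.0

agents = [Agent(b, q) for b, q in zip(range(1, 6), range(6, 11))]

# Gather the per-agent attributes into arrays...
balances = np.array([a.total_balance for a in agents], dtype=float)
quals = np.array([a.sum_qualification for a in agents], dtype=float)

# ...compute all the Cobb-Douglas values in one vectorized pass
# (alpha = 0.5 is an assumed exponent for illustration)...
alpha = 0.5
dummies = balances**alpha * quals**(1 - alpha)

# ...and assign the results back to the agents.
for agent, d in zip(agents, dummies):
    agent.dummy_quantity = d
```

The loop over Python objects still happens twice (gather and scatter), but the expensive exponentiation runs once over a whole array instead of once per agent.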

Here's a highly-simplified demonstration of the speedup:

%%timeit
vals = range(100000)
new_vals = [v**0.5 for v in vals]
> 100 loops, best of 3: 15 ms per loop

Now, with numpy (assuming `import numpy as np`):

%%timeit
vals = np.array(range(100000))
new_vals = np.sqrt(vals)
> 100 loops, best of 3: 6.3 ms per loop

However, a slow-down from a few seconds to 3 minutes seems extreme for the difference in calculation. Is the model otherwise behaving the same way with the C-D function, or is the function driving changes in the model dynamics that are the real reason for the slowdown? If the latter, then you might need to look elsewhere for the bottleneck to optimize.
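One quick way to locate the real bottleneck is to profile a single model step with the standard-library `cProfile`; `run_model_year` below is a hypothetical placeholder for one iteration of your model:

```python
import cProfile
import pstats
import io

def run_model_year():
    # Placeholder workload; replace with one step of your actual model.
    return sum(v**0.5 for v in range(100000))

pr = cProfile.Profile()
pr.enable()
run_model_year()
pr.disable()

# Print the five functions with the largest cumulative time.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

If the C-D calculation itself dominates the listing, vectorizing it as above should help; if some other part of the model has ballooned, that is where to optimize instead.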
