Best approaches for ABM in cadCAD?

Before encountering cadCAD, I planned to use Mesa to build economic system models. cadCAD seems like a better choice!

I’m curious what approach others are using for simulating agents in cadCAD. In Mesa, there is an agent class with a step method that gets called (according to some scheduling scheme) at each timestep.

As far as I can tell there is no built-in agent functionality in cadCAD. Are there examples of ABM in cadCAD I should look at? If not, any suggestions about how to integrate agents and agent scheduling into a cadCAD simulation?

Thank you!

Adam


Hey Adam,

Glad to have you here. Strictly speaking, cadCAD and Mesa are both Python tools; they are not mutually exclusive. That said, cadCAD focuses on the systematic study of the interaction patterns through which agents interact, in addition to the study of behavior and emergent properties. Specifically, cadCAD is designed to ask what-if questions about the platform, market, or mechanism design in which the agents are interacting.

In our tutorial series, the number of agents goes beyond two starting in Tutorial 5. There, a network of agents is declared using NetworkX. This is a point of interoperability with Mesa, which relies on other Python frameworks to handle dynamic games (cadCAD fills this role) and networked systems (NetworkX fills this role). You can read more on building up the ABM stack with Python and Mesa here:


Our current research has not yet required complex enough agents to necessitate use of Mesa but testing out the integration is on my radar. If you decide to try it, I hope you share your findings with this community.
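To make the NetworkX approach above concrete, here is a minimal sketch of declaring agents as graph nodes with state stored in node attributes. The attribute names ("holdings", "type") and the ring topology are hypothetical choices for illustration, not from the tutorials:

```python
import networkx as nx

def build_agent_network(n_agents=4, initial_holdings=100.0):
    """Declare agents as nodes; per-agent state lives in node attributes."""
    g = nx.Graph()
    for i in range(n_agents):
        g.add_node(i, holdings=initial_holdings, type="trader")
    # Connect agents in a ring so each has two trading partners.
    for i in range(n_agents):
        g.add_edge(i, (i + 1) % n_agents)
    return g

network = build_agent_network()

# The whole graph can then sit in cadCAD's initial state as one variable:
initial_state = {"network": network}
```

State update functions would then read and write node attributes (e.g. `network.nodes[i]["holdings"]`) rather than looping over agent objects.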

Additionally, here is an example of a networked agent based model in cadCAD:

https://colab.research.google.com/drive/1Qsm3OMfgGQtpMhMp-rchCnfR71-HacSa

It's available to play with directly in Google Colab. One note: it's R&D in progress as part of the https://commonsstack.org/ project.

Thanks!
-Z


Thanks Michael! This is helpful.

I’ve been through all the tutorials, but the Commons Stack conviction voting code gives me a better picture of how Networkx can be used to generate and manipulate agents (for anyone else looking, possibly the most relevant code is here: https://raw.githubusercontent.com/BlockScience/conviction/master/conviction_helpers.py and: https://raw.githubusercontent.com/BlockScience/conviction/master/conviction_system_logic3.py).

I wasn’t planning to model the system as a network. For now, it’s a relatively simple model – no governance or social activity, just economic transactions.

Any thoughts on when a system should be modeled as a network, and when not?

In cases when not using Networkx for managing agents, it seems to me it would be relatively simple to mimic the Mesa structure: define an agent class, with an instance for each agent. The list of agents would be a state variable, and each instance’s update method would be called from within the state variable update function (either directly, or via a separate generic agent updating function that could include various sequencing options).

In general, is there a guideline about what to include in policy functions vs. state update functions? Is there a reason to split agent property updates into policy (for update logic) and state update (for actually updating the objects)?

For my current needs, adding some agent managing pieces to cadCAD seems less cumbersome than integrating Mesa. On the other hand, looking again at the Mesa docs, integration should be pretty straightforward. Maybe I’ll experiment and report the results.

The multi-level Mesa paper looks excellent. More than what we need for the project I’m currently working on, but it would be exciting to have that kind of sophistication connected to cadCAD!

Thanks for your help,

Adam


Thanks for starting this, Adam! Lots of interesting topics here I’ve been meaning to share some thoughts on.

Any thoughts on when a system should be modeled as a network, and when not?

Personally, I would say definitely go with the network modeling if you’re interested in graph measures or graph plotting. But if all you’re looking for is a more robust/convenient data structure to replace lists and dicts as state variables, then I’d argue that the first thing to consider is how fluent you and/or your team are in something like NetworkX versus something like Pandas, which in many cases might do the job just as well.
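To illustrate the pandas alternative: if you don't need graph measures, a DataFrame with one row per agent can replace a list of agent objects, and updates become vectorized column operations. The column names here are made up for illustration:

```python
import pandas as pd

# Agents as DataFrame rows; one column per agent attribute.
agents = pd.DataFrame({
    "agent_id": [0, 1, 2],
    "balance": [100.0, 50.0, 75.0],
})

# A vectorized update replaces a per-agent loop,
# e.g. paying 1% interest to every agent at once:
agents["balance"] = agents["balance"] * 1.01
```

A state update function would return the modified DataFrame (or a copy of it) as the new value of the agents state variable.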

In cases when not using Networkx for managing agents, it seems to me it would be relatively simple to mimic the Mesa structure: define an agent class, with an instance for each agent. The list of agents would be a state variable, and each instance’s update method would be called from within the state variable update function (either directly, or via a separate generic agent updating function that could include various sequencing options).

Fully agree with this. Another approach could be, instead of a list of agents, to have one list per agent attribute. I have yet to compare the memory efficiency of those two approaches.
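A minimal sketch of the Mesa-style structure described above, assuming an "agents" state variable and cadCAD's state-update-function signature; the agent behavior and shuffled ordering (mimicking Mesa's random activation) are placeholder choices:

```python
import random

class Agent:
    """Minimal Mesa-style agent with a step method."""
    def __init__(self, agent_id, balance):
        self.agent_id = agent_id
        self.balance = balance

    def step(self):
        # Placeholder behavior: spend one unit if affordable.
        if self.balance >= 1.0:
            self.balance -= 1.0

def update_agents(params, substep, state_history, previous_state, policy_input):
    """cadCAD-style state update function: advance every agent one step.
    Shuffling the call order mimics Mesa's RandomActivation scheduler;
    other sequencing options could be swapped in here."""
    agents = previous_state["agents"]
    order = list(agents)
    random.shuffle(order)
    for agent in order:
        agent.step()
    return "agents", agents
```

In practice you may want to copy the agent list before mutating it, so earlier entries in cadCAD's state history aren't modified in place.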

In general, is there a guideline about what to include in policy functions vs. state update functions? Is there a reason to split agent property updates into policy (for update logic) and state update (for actually updating the objects)?

In my opinion, it helps to think of policy functions as the computation of the intended action, and the state update function as the computation of the results/consequences/outcomes of those intentions. For example: N agents => N policy functions, each returning the force the agent will exert on an object => one state update function to compute the resulting force and the new position of the object, possibly also factoring in constraints imposed by the system state (e.g. a wall).
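The force example could be sketched like this, assuming two agents and a "position" state variable; cadCAD's default aggregation adds identically-named policy signals before they reach the state update function:

```python
# Two policy functions, each returning the force its agent *intends*
# to exert on the object.
def agent_a_policy(params, substep, state_history, state):
    return {"force": 3.0}

def agent_b_policy(params, substep, state_history, state):
    return {"force": -1.0}

# One state update function computes the outcome of those intentions.
# policy_input["force"] is the net (summed) force from all agents.
def update_position(params, substep, state_history, state, policy_input):
    new_position = state["position"] + policy_input["force"]
    # Constraint imposed by the system state: a wall at x = 10.
    new_position = min(new_position, 10.0)
    return "position", new_position
```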

Policy functions could also be used to increase efficiency when we need to compute some value from the current state in multiple state update functions in the same block: we can do the computation once in a policy function, and the value will be passed as an input to all state update functions.
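For instance, a derived value could be computed once in a policy function and consumed by several state update functions in the same block. The state shape here (a "balances" dict, a "reserve", a "supply_log") is hypothetical:

```python
# Compute the derived value once per block...
def total_supply_policy(params, substep, state_history, state):
    return {"total_supply": sum(state["balances"].values())}

# ...and reuse it in multiple state update functions, without
# recomputing the sum in each one.
def update_price(params, substep, state_history, state, policy_input):
    return "price", state["reserve"] / policy_input["total_supply"]

def update_supply_log(params, substep, state_history, state, policy_input):
    return "supply_log", state["supply_log"] + [policy_input["total_supply"]]
```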

Hi mzargham!

Merry Christmas!

I am trying to run the examples you sent links for, but I am missing the bonding_curve_eq file. Could you please post a link to it?

I am also trying to run the prey_predator_agent example. I think this one will be the most helpful to me. Unfortunately, it needs a lot of debugging. How did you run it without errors? I am trying to run it both as a Jupyter notebook and as a Python 3 script.

Many thanks!

Hi Anabele!

Here is a link to the version of the bonding_curve_eq file that I have in some of my ongoing R&D work:


Furthermore, the mathematics behind bonding curves can be found in this academic paper, From Curved Bonding to Configuration Spaces:
https://epub.wu.ac.at/7381/

If you are also having trouble with predator-prey, it's possible there is a mismatch with your environment. @solsista was kind enough to create a doc to help with troubleshooting environments here:

I hope these help!
-Z