notes: collected from the past few days
applying data-oriented design
- math is faster than l1 reads! sometimes it can be better to perform redundant calculations rather than cache the result of math expressions.
- cpu is very fast - main memory is slow. don't lean on main memory so much - shift more and more of the work onto the cpu!
- don't have cache misses : (
- identify where you have many objects in memory, then make the size of each object smaller and smaller, better and better - iterate!
- compact the amount of information in each struct - particularly the structs that are used most in your programs, since compressing this information optimizes everything about the program at hand. how do i represent the same information with a much smaller memory footprint? (field-sizing sketch after this list.)
- remove the boolean another way - instead of storing a boolean tag in each struct, split the structs into two arrays! (structs for which b is true go in one array; structs for which b is false go in the other.) (two-array sketch after this list.)
- now there's no reason to load the irrelevant things - which array we're iterating tells us what to check, and the cache misses go away! store booleans "out of band" and make sure the information you touch is always relevant.
- array of structs -> struct of arrays - this can significantly reduce the memory footprint. (soa sketch after this list.)
- move sparse data out of band so it isn't allocated for in the original structs - e.g. if only 10% of monsters hold something, don't reserve room for it in every struct; store the sparse data in a side lookup table (a hashmap) instead. (hashmap sketch after this list.)
- instead of keeping "maybe" fields inside the struct, use an 'OO' inheritance-like approach: create specific structs with the extra fields rather than feeding everything into the superstruct and leaving it optional!
- make as much of the categorization as possible "disappear" into the encoding tag - the OO-like specific structs let us pull that information out! if every case needs some additional information, keep a 'generic' field in the superstruct and specialize it later. (enum sketch after this list.)
- choose encodings according to your actual data distributions - knowing how the different cases are actually distributed lets you approximate which layout of your data is optimal!
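a minimal rust sketch of the field-sizing idea (struct names and value ranges are made up, not from the source): pick the smallest types that still cover the real range of values.

```rust
#![allow(dead_code)]

use std::mem::size_of;

// "before": generously sized fields (hypothetical monster struct)
struct MonsterFat {
    hp: u64,   // in practice never exceeds a few thousand
    x: f64,
    y: f64,
    kind: u64, // only a handful of distinct kinds exist
}

// the same information with smaller field types
struct MonsterCompact {
    x: f32,
    y: f32,
    hp: u16,
    kind: u8,
}

fn main() {
    println!("fat: {} bytes", size_of::<MonsterFat>());         // 32
    println!("compact: {} bytes", size_of::<MonsterCompact>()); // 12
    // with a million monsters that's ~32 MB vs ~12 MB the cpu has to pull through the cache
}
```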
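the "remove the boolean" trick, sketched the same way (types and names are mine): instead of an `alive: bool` on every monster, membership in one of two vecs is the boolean, so a frame update never even loads the dead ones.

```rust
struct Monster {
    hp: u16,
    // no `alive: bool` here - liveness is stored out of band, as which
    // vec the monster lives in
}

struct World {
    alive: Vec<Monster>,
    dead: Vec<Monster>,
}

impl World {
    fn update(&mut self) {
        // only relevant data gets touched: no branch per monster, no
        // cache lines wasted on dead monsters
        for m in &mut self.alive {
            m.hp = m.hp.saturating_add(1); // e.g. regenerate a bit
        }
    }

    fn kill(&mut self, i: usize) {
        // O(1) move between the arrays; order within `alive` doesn't matter
        let m = self.alive.swap_remove(i);
        self.dead.push(m);
    }
}

fn main() {
    let mut w = World {
        alive: vec![Monster { hp: 10 }, Monster { hp: 3 }],
        dead: Vec::new(),
    };
    w.kill(1);
    w.update();
    println!("alive: {}, dead: {}", w.alive.len(), w.dead.len());
}
```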
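the array-of-structs -> struct-of-arrays move: one vec per field removes per-element padding, and a loop over one field only drags that field through the cache. (sizes here assume the compacted field types from the first sketch.)

```rust
use std::mem::size_of;

// array of structs: every element pays alignment padding, and a pass over
// hp also pulls x/y/kind into the cache
#[allow(dead_code)]
struct MonsterAos {
    x: f32,
    y: f32,
    hp: u16,
    kind: u8,
} // 12 bytes per element (11 bytes of data + padding)

// struct of arrays: one allocation per field, no per-element padding
#[allow(dead_code)]
struct Monsters {
    x: Vec<f32>,
    y: Vec<f32>,
    hp: Vec<u16>,
    kind: Vec<u8>,
}

fn main() {
    let n = 1_000_000;
    let aos = n * size_of::<MonsterAos>();
    let soa = n * (2 * size_of::<f32>() + size_of::<u16>() + size_of::<u8>());
    println!("aos: ~{aos} bytes, soa: ~{soa} bytes"); // ~12 MB vs ~11 MB
    // the bigger win: a loop over `hp` alone now reads ~2 MB instead of ~12 MB
}
```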
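moving sparse data out of band, sketched with a hashmap keyed by monster index (field and type names are invented): only the ~10% of monsters that actually hold an item pay for one.

```rust
use std::collections::HashMap;

struct Monster {
    hp: u16,
    // no `held_item: Option<Item>` field - that would make every monster
    // carry space for an item even though most hold nothing
}

#[derive(Debug)]
struct Item {
    weight: u32,
}

fn main() {
    let monsters = vec![Monster { hp: 10 }, Monster { hp: 7 }, Monster { hp: 4 }];

    // sparse data lives in a side table keyed by monster index
    let mut held_items: HashMap<usize, Item> = HashMap::new();
    held_items.insert(1, Item { weight: 3 });

    for (i, m) in monsters.iter().enumerate() {
        match held_items.get(&i) {
            Some(item) => println!("monster {i} (hp {}) holds {:?}", m.hp, item),
            None => println!("monster {i} (hp {}) holds nothing", m.hp),
        }
    }
}
```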
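and the "maybe fields" / encoding-tag bullets, sketched with a rust enum rather than oo inheritance (the variants are invented for illustration): each variant carries only the fields its case needs, and the truly shared "generic" data stays in the outer struct.

```rust
#![allow(dead_code)]

// instead of one superstruct full of optional fields
// (breath_damage: Option<u32>, split_count: Option<u8>, ...),
// let the kind tag carry the kind-specific data
enum MonsterKind {
    Slime { split_count: u8 },
    Dragon { breath_damage: u32, hoard_gold: u64 },
}

struct Monster {
    hp: u16,           // truly shared ("generic") fields live here
    kind: MonsterKind, // the categorization *is* the encoding tag
}

fn damage_bonus(m: &Monster) -> u32 {
    // no `is_dragon: bool`, and no unused dragon fields inside every slime
    match &m.kind {
        MonsterKind::Slime { .. } => 0,
        MonsterKind::Dragon { breath_damage, .. } => *breath_damage,
    }
}

fn main() {
    let m = Monster {
        hp: 30,
        kind: MonsterKind::Dragon { breath_damage: 12, hoard_gold: 900 },
    };
    println!("hp {}, bonus {}", m.hp, damage_bonus(&m));
}
```

one caveat tied to the distribution bullet: a single `Vec<Monster>` of this enum still sizes every element to the largest variant, so if (say) 90% of monsters were slimes, the ~0.9 * small + 0.1 * big average size only materializes if each variant gets its own array (the same split trick as above) - which is exactly the kind of choice the actual data distribution lets you make.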
structured programming
src: found this via halt and catch fire - it's essentially the böhm-jacopini theorem: flowcharts can compute any computable function if they combine subprograms in these ways:
- execute subprograms sequentially
- execute one of two subprograms according to the result of a boolean expression
- repeat executing a subprogram as long as a bool is true
naturally, these three control flows map to sequential operations, if expressions, and while loops, respectively. the programs are allowed to maintain extra variables in order to keep track of where they are in the computation - this inspired structured programming! (in reality, structured programming is a disciplined subset of c lol.)
"folk version": a single global while loop with a series of conditionals able to model any program.
dementia care and compassion
src: from shannon mattern, an incredible interdisciplinary researcher and academic. good "twitter personality" too : )
this hit hard - aging and dementia are frightening.
"beautiful mess" of software dev
src: https://cutlefish.substack.com/ (weird name). product often picks a direction or tactic too late or too early - choosing when to switch, retool, etc. is incredibly important! feature factory? https://medium.com/hackernoon/12-signs-youre-working-in-a-feature-factory-44a5b938d6a2
- public document at doc.anagora.org/2021-11-27
- video call at meet.jit.si/2021-11-27