📕 subnode [[@flancia.org/metta]] in 📚 node [[metta]]

Computer Science and Machine Learning provide an effective vocabulary for talking about mental states. As I go about my life, I tend to think about which computational processes I could apply to the tasks at hand. I'm not sure everybody does this as often as I do; probably geeks do it more often? I've been doing it more and more since I started meditating and became more aware of my mental processes and formations.

While we live, we experience things; we feel a series of qualia. As we interact with the world and feel things, we learn. When we learn, we adjust our internal neurological state.

More in computing terms: feelings about things and ideas are linked to weights in our models. Qualia are vectors; each dimension is a characteristic of the experience. Experiencing a quale is evaluating a condition. Usually we experience many qualia muddled together; we are evaluating batches. Summing up: the weights we adjust when we learn determine the experiences we feel under different stimuli; they might be the qualia.
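
To make the metaphor concrete, here is a minimal sketch in Python; the dimension names, shapes, and values are invented for illustration, not claims about brains:

```python
import numpy as np

# A quale as a vector: each dimension is one characteristic of the experience.
# The dimension names here are made up.
DIMENSIONS = ["warmth", "novelty", "valence", "belief"]

def experience(stimuli: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Evaluating stimuli against learned weights yields qualia (vectors)."""
    return weights * stimuli  # elementwise: each learned weight shapes one dimension

weights = np.random.rand(len(DIMENSIONS))      # adjusted over a lifetime of learning
stimuli = np.random.rand(8, len(DIMENSIONS))   # many stimuli arrive muddled together
batch_of_qualia = experience(stimuli, weights) # a batch evaluation, shape (8, 4)
```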

Concentration ("Samatha") lets you evaluate your mental state while considering fewer items at a time. It lets you focus more on individual weights in the model, observe them and update them individually.

You believe the things that ended up with a high weight in the "belief" dimension of their qualia. You believe these things because they happen to be true, or useful, or otherwise important. Wisdom, as distinct from knowledge, is the sum of all this information you carry.

Metta is theory of mind training.

Metta lets you experience the affinity weights that link you to other people in your life. You feel happy when they feel happy; how much? You feel miserable when they suffer; how much? It might be that some weights are lazily initialized: reading them actually sets them (lazy here meaning they start as NaN and don't affect batch calculations). Or perhaps reading them also reinforces them: if you check for a feeling often enough, and it comes up positive, perhaps it comes up a bit more positive every time (TODO: look up neurological correlates).
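
A sketch of the lazy-initialization idea; the initial value and the reinforcement factor are invented placeholders (the TODO above still stands):

```python
import math

class AffinityWeights:
    """Affinity weights linking you to other people, lazily initialized.
    Unread weights are NaN, so a NaN-aware aggregate (think np.nansum)
    skips them; reading one sets it, and each later read reinforces it."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = {}

    def read(self, person: str) -> float:
        w = self.weights.get(person, math.nan)
        if math.isnan(w):
            w = 0.1                 # first read: lazy initialization sets the weight
        else:
            w = min(1.0, w * 1.05)  # later reads: a feeling that comes up positive
                                    # comes up a bit more positive every time
        self.weights[person] = w
        return w

affinity = AffinityWeights()
affinity.read("friend")  # 0.1: set on first read
affinity.read("friend")  # slightly higher: reinforced
```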

The self is a cluster of ideas we are positively disposed to; our identity is the set of things that a) make us happy for one reason or another (we feel as if we "choose" those), or b) we don't like but find otherwise useful (for example, because they are good at predicting how we will behave in the future, like character traits we don't want but carry anyway).

Feelings of happiness related to other people let you identify with those people; they move your self-distance operator towards a center of gravity that is not within yourself. Your self expands as you do Metta. As you do it with yourself, you also realize that the mental processes you use to build a model of what other people think (theory of mind) are (or at least feel like) the same ones you use to think in first person.
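
One way to render the distance-operator image in code; the vectors, affinities, and radius are all invented for illustration, and the last two lines preview the "flip" described a few paragraphs below:

```python
import numpy as np

def self_center(ideas: np.ndarray, affinities: np.ndarray) -> np.ndarray:
    """The self as the affinity-weighted center of a cluster of ideas/people.
    Raising the weights toward others pulls the center of gravity
    away from a point strictly within yourself."""
    return (affinities[:, None] * ideas).sum(axis=0) / affinities.sum()

def in_self(entity: np.ndarray, center: np.ndarray, radius: float) -> bool:
    """Membership test: does this entity fall within the self's radius?"""
    return bool(np.linalg.norm(entity - center) < radius)

people = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # you and two others
before = self_center(people, np.array([1.0, 0.1, 0.1]))  # center sits near "you"
after  = self_center(people, np.array([1.0, 0.9, 0.9]))  # metta drifts it outward

print(in_self(people[1], before, radius=1.6))  # False: the other is outside the self
print(in_self(people[1], after,  radius=1.6))  # True: the membership decision flipped
```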

Hypothesis: they are the same. "Being social" probably came after just "being" (and trying to understand your own being), evolutionarily speaking. So it would make evolutionary sense for the social faculty to build on the earlier neural pathways; a mutation is more likely to bolt a new module onto existing circuitry than to grow an independent system.

Once you realize all this, you can choose to adjust your self any way you like. Part of your identity is group-linked; the group can be expanded, and your identification with it too (every time you do this, perhaps you "feel" it to be true a bit more). You are not the same as all people in all ways; but if you care about them, and you want them to be happy, well: you've got something in common with all of them.

Eventually the significant weights you use to decide whether some entity is inside your "self" or not flip. In a practical way, your self can become "everybody".

This is the nature of the Buddha: Buddhism mostly agrees that it is core to all living beings.

You become the Buddha by identifying with your Buddha nature; believing this to be true.
