> Ah, I thought—this is precisely where linear algebra can come to the rescue! Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”
Two metaethical presuppositions stand out:
* That right and wrong (or right- and wrong-making action features) are discrete and quantifiable.
* That right and wrong (or ...) are expressible over communities as consequences to external individuals, rather than the properties of (potentially ineffectual!) actions themselves.
Written as counterclaims:
* We have no reason to believe that any right or wrong action corresponds to a particular number of morality credits, or that actions are independent (consider: most of us have some sort of intuition about bad things being excusable or forgivable if done only once).
* We have no reason to believe that the outcomes of our actions are what make them actually right. Consider the murderer who cooperates with the "moral" group to find ideal victims, but fails to kill them out of cowardice or incompetence -- a reasonable intuition to have would be that the murder is bad, despite external cooperation to the contrary.
What is legal is very much influenced by what is considered moral. See: slavery, sex outside marriage, homosexuality, free speech, the very notion of murder (stand your ground vs excessive self-defence) and of crime in general (whether malicious intent is present).
There are two opposite approaches to establishing rules for a society: imposition and negotiation. I prefer negotiation. Morals tend to work in absolutes, so morality gets into laws through negotiation, when people with different morals make a deal. When majority morals get written into law unimpeded, with no regard for minorities, the result is not pretty.
Negotiation at the level of the individual's relationship to society and law is imposition with extra steps, but it is a desirable sophistication of force. Sovereign citizens come to mind, talking about social contracts and moral law, but such things do not actually mediate their relationship to government.
> A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
One immediate problem with this definition is that it is too symmetric: assuming "honor among thieves" (immoral people cooperate with immoral people and refuse to cooperate with moral people), it is unchanged under interchange of "moral" and "immoral". That is, it can't tell you whether it's calling good "moral" or evil "moral".
Anyway, the circular definition I'm interested in is Saussurean structuralism: the meaning of a sign is determined by its place in a system of signs. I suppose in linguistics the article's construction then leads to eigenwords, but I am skeptical that eigenwords actually advance our understanding of semantics. I'm not as optimistic as the author about the power of this construction to advance our epistemology.
It's not symmetric at all ... “Happy families are all alike; every unhappy family is unhappy in its own way” ---> “[Moral people] are all alike; every [immoral person] is [immoral] in [their] own way”.
There are many more varieties of immorality than there are of vanilla morality. The immoral are not only thieves. They will cooperate with each other to further their agendas, but recognize the vast spectrum of immoral motivations and behaviors. Corrupt politicians rely on Cambridge Analytica ... it's clear to [most of] the immoral that they're being immoral; they're under no illusions.
What is most interesting right now, to me, is the very real delusion I think I'm witnessing in the most "moral" segment of the US population. I grew up an Evangelical Protestant, child of missionary parents. To watch the current political support from this segment for a figure most would have unequivocally condemned 20 years ago is fascinating. I've recently watched a number of videos from Frank Schaeffer, including one dissecting US Attorney General William Barr's Notre Dame speech discussing general US morality [1]. Maybe I'll be corrected with grace in the afterlife, but I think Frank gets a lot of this right. (Frank's father, Francis Schaeffer, provided the underpinnings of much of what became the Moral Majority and follow-on movements/organizations from the likes of Jerry Falwell, Pat Robertson, Franklin Graham, etc.)
"Happy families are all alike; every unhappy family is unhappy in its own way"
This, to me, is clearly false, and citing an aphorism from a work of fiction isn't particularly strong evidence for it. What I do believe is that, at any given moment, there are fewer moral courses of action than immoral ones; but that does not mean that, in a given culture, the number of immoral actions actually practiced is much higher than the number of moral actions available. This is because people tend to defect from morality in similar ways. Not committing adultery, for example, has a fairly standard method of defection.
Really, I disagree with Scott's entire premise of defining morality purely in terms of cooperation. Why solely cooperation? What Scott is missing is that a moral system has to actually be viable across generations as well. That's easy for him to do with a model of infinitely lived agents, but harder in a model of finitely lived agents, where each generation has to "teach" the code to the next, and younger generations can choose to change the rules of the game.
For me, morality can only be observed in relation to adhering to a code of behavior, and then you need to define the morality of the code itself to get the actual morality of the behavior. Moreover, the morality of a code only makes sense in a relative context (is one code better than another code, or than a group of competing codes?). In my opinion, the best neutral definition of success is greater reproductive success, where by "reproductive" I mean social reproduction. So a moral system that reproduces itself across generations more successfully than another is more moral. This requires 1) that people within the group continue to believe in it across generations, and 2) that the group that believes in it grows and thrives across generations.
Especially in a hostile world where there are rival groups and rival moral systems waiting to dismantle your group and your system, and there are many opportunities for decay from within as well. That code is supposed to light your way through these treacherous rocks, so that your society will survive the challenges it faces.
For example, communism won over large parts of Eurasia, but within a few generations people stopped believing in it and abandoned it. So it was not a very moral system. A more moral system would be one that survives for thousands of years, successfully reproducing in each generation and ensuring the reproductive success of those societies who practice it. Thus fundamentalist Islam would be a more moral system than communism. Whether fundamentalist Islam is more moral than liberal Western Democratic norms remains to be seen -- who will outlast whom?
And I think if you go back to traditional moral codes, they were explicitly designed with the threat or promise of reproductive success if the code was followed. E.g. from Deuteronomy 30:16-19
"In that I command thee this day to love the LORD thy God, to walk in his ways, and to keep his commandments and his statutes and his judgments, that thou mayest live and multiply: and the LORD thy God shall bless thee in the land whither thou goest to possess it. But if thine heart turn away, so that thou wilt not hear, but shalt be drawn away, and worship other gods, and serve them; I denounce unto you this day, that ye shall surely perish, and that ye shall not prolong your days upon the land, whither thou passest over Jordan to go to possess it. I call heaven and earth to record this day against you, that I have set before you life and death, blessing and cursing: therefore choose life, that both thou and thy seed may live"
So on a long enough time scale, might makes right, or morality is judged by the success of the people who adhere to it. If you follow this code, you will be stronger across many generations than a rival who follows a different code. That makes your moral code more correct. This is hinted at in suggestions like Tit-for-tat being a more successful strategy, but then there is a lot of extraneous talk about "cooperation", when societies that explicitly orient themselves on the basis of maximizing cooperation end up collapsing after a few generations, or remain completely stunted and are wiped out by more robust societies that do not solely focus on cooperation. You can see Scott struggling with this in his eigenMoses or eigenJesus choice, but really that is the first of many stumbling blocks he would face if he had more agents interacting beyond the model of a repeated game.
Stability works until it doesn't. The Romans were very good at building a stable and powerful empire until the dominance/greed ethic that powered it consumed it from the inside out and it collapsed.
I find Aaronson's output consistently baffling, because he seems to understand the world entirely through mathematical abstractions, and mathematical abstractions are fundamentally a bad way to solve problems if you don't understand the problem space.
You can create eigenmorality or eigendemocracy if you really want to, but it seems he's more interested in the fact that they generate a neato mathematical puzzle than in the 4000 or so years of existing debate about ethics and government.
Marx, Durkheim, Weber, Smith, Hobbes, and many others will tell him far more about the challenges of politics and morality than linear algebra will.
But applying linear algebra creates a seductive illusion of powerful insight into a problem that's much harder than it appears to be.
In reality the "insight" is minimal and trite. Worse: the process which generates the insight reveals an unconscious ethical and political bias of its own - and also guarantees this solution will cause more problems than it solves.
There is quite a lot to learn from Rome's stability, but of course it wasn't perfect. But I do believe there are evolutionary forces at play where stronger societies displace weaker ones, and after the advent of writing, we can access the morality of long running societies and use that as an input into our own. This is a much more valuable inheritance from Rome than just the architecture or Latin alphabet. For example, rule of law (the Romans were fanatically legalistic) continues to be an important inheritance in the West, and the notion that there is something moral about following the rules and being a law-abiding citizen is an important part of the Western heritage.
But you are right, the idea that you can come up with a quaint mathematical argument (and the mathematical insight here is pretty basic) as a replacement for studying history and philosophy is very foolish.
No, corrupt politicians rely on the FBI and the deep state and corrupt media to try to impeach an anti-establishment president. Yet to a lot of people it's not obvious they're being immoral, because the bought and paid for media portrays the good orange man as "immoral".
To me, there is a clear indicator: human suffering.
A group that strives to inflict suffering on the outgroup, or on unwilling members of the ingroup, are "bad guys".
A group that strives to limit and reduce suffering, for the ingroup, the outgroup, or both, are "good guys", as long as they don't match the previous definition.
Corollary: "good guys" normally stay away from aggression.
> To me, there is a clear indicator: human suffering.
I like to divide morality into happiness ("utility") and rights. All else equal, increasing aggregate happiness is better. However, if increasing (the authority's measure of) aggregate happiness tramples a lot of people's rights, that sucks.
Rights (and their implications) are easier to codify legally than happiness maximization. This eigenmagic seems easier to compute (because it's easier to encode) for happiness than for rights.
(One can frame rights as an implication of happiness maximization. Some economists do this. A (properly bounded) right to private property, for instance, increases net happiness by avoiding tragedies of the commons. Thievery should be illegal because stolen goods are on average less valuable to the thief than the victim. I don't know whether such analyses imply all the rights we'd like, or how absolutely.)
Indeed, "rights" is a reasonably good proxy for not increasing suffering. If nobody is entitled to arbitrarily take your possessions or do bodily harm to you, a number of the worst sources of suffering are removed, at least most of the time.
Since both happiness and suffering are strictly subjective (i.e., not directly measurable or comparable), such proxies are our only hope of producing formal, computable approaches to limiting suffering.
But the intention to limit suffering for most of the outgroup seems a key indicator for me.
In stories we tell, the good guys are often fighting, and defeating the bad guys.
Even though I would not say it is a good thing, I would have more respect for somebody who stood up for themselves and was aggressive in tense moments, if it seemed justified (provoked by the aggressive party).
You probably need to add "increase human happiness" or something along those lines, because otherwise the group of "let's painlessly euthanize all the other humans first, and then ourselves" guys are trivially "good guys": they monotonically reduce human suffering to zero.
In Buddhism, human existence is a source of inevitable suffering. The only way out is to (eventually) reduce your karma to nothing, and to not be born again.
The difference from your euthanizing sect is that nobody can be forced to follow this path (not just morally, but also technically).
Because the Eigenmoses matrix contains negative values, the Perron–Frobenius theorem does not apply, so a unique largest eigenvalue is not guaranteed. This can be observed in the form of holy wars. Moreover, I think we can construct real niceness matrices none of whose eigenvalues are real.
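A quick numerical check of that last claim (the 2×2 matrix below is a made-up toy, not anything from the post): a real antisymmetric niceness matrix already does the trick.

```python
import numpy as np

# Hypothetical 2-person "niceness" matrix: player 0 is nice to player 1 (+1),
# while player 1 is nasty to player 0 (-1). Entries are negative, so the
# Perron-Frobenius theorem's nonnegativity assumption fails.
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

eigvals = np.linalg.eigvals(M)  # eigenvalues of this real antisymmetric matrix

# The eigenvalues are +i and -i: purely imaginary. This real niceness matrix
# has no real eigenvalue at all, let alone a unique largest one to rank by.
has_real_eigenvalue = any(abs(ev.imag) < 1e-12 for ev in eigvals)
```

Real skew-symmetric matrices are the standard source of such examples: all of their nonzero eigenvalues are purely imaginary.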
"Back then, I was extremely impressed by a research project called CLEVER, which one of my professors, Jon Kleinberg, had led while working at IBM Almaden. The idea was to use the link structure of the web itself to rank which web pages were most important, and therefore which ones should be returned first in a search query. Specifically, Kleinberg defined “hubs” as pages that linked to lots of “authorities,” and “authorities” as pages that were linked to by lots of “hubs.” At first glance, this definition seems hopelessly circular, but Kleinberg observed that one can break the circularity by just treating the World Wide Web as a giant directed graph, and doing some linear algebra on its adjacency matrix."
Interesting how multiple people can have the same idea — about 10 years ago I was working on the problem of having people on Mechanical Turk label terms with meanings, and determined that applying PageRank to the connected graph of nodes (where nodes were Turkers, and edges were agreements in labeling) would allow me to derive a trust metric for each agent, allowing me to discard noise/bots easily from the dataset.
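A minimal sketch of that trust metric, under assumed data (the agreement counts below are invented for illustration; plain power iteration keeps it self-contained):

```python
import numpy as np

# Hypothetical agreement counts between five Turkers: A[i, j] = number of
# labels on which worker i agreed with worker j. Worker 4 plays a "bot"
# that rarely agrees with anyone.
A = np.array([
    [0, 9, 8, 7, 1],
    [9, 0, 9, 8, 0],
    [8, 9, 0, 9, 1],
    [7, 8, 9, 0, 0],
    [1, 0, 1, 0, 0],
], dtype=float)

d = 0.85                # standard PageRank damping factor
n = A.shape[0]
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
# (all rows are nonzero in this toy; real data needs a guard for zero rows)

trust = np.full(n, 1.0 / n)           # everyone starts with equal trust
for _ in range(100):                  # power iteration to the fixed point
    trust = (1 - d) / n + d * (P.T @ trust)

# The bot, which few workers agree with, ends up with the lowest trust,
# so a simple threshold on `trust` can filter it from the dataset.
```

The damping term plays the same role as in web PageRank: it keeps the trust distribution well-defined even if the agreement graph is disconnected.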