#rationality

analytictsar:

I think one thing that happens is that people get annoyed by something that's harmless, or that's just kind of a show of solidarity, because they see it a bunch, and then try desperately to form some deep moral reason to criticize the person doing it. This could be avoided by examining your emotions for more than 2 seconds.

“Causation is not an excuse, however, for all behaviour is caused. If causation were an excuse, no one would ever be held responsible for any behaviour… Causation is not the issue; nonculpable lack of rationality and compulsion is.”

— S. J. Morse, “Excusing the Crazy: The Insanity Defense Reconsidered” (1985)

“…one is a moral agent only if one is a rational agent. Only if we can see another being as one who acts to achieve some rational end in the light of some rational beliefs will we understand him in the same fundamental way that we understand ourselves and our fellow persons in everyday life.”

— M. Moore, Law and Psychiatry: Rethinking the Relationship (1984)

raginrayguns:

raginrayguns:

discoursedrome:

transgenderer:

this metaculus question scott linked in the most recent links is very strange to me. 50% odds of plant meat indistinguishable from meat meat *next year* seems crazy high to me! i feel like ive been trying all sorts of plant meats and they are all pretty far from how i remember real meat! i mean, theyre good, i like them, but theyre not THAT meatlike, i feel like anyone could tell

I feel like there’s some sort of phenomenon here like with people mistaking ELIZA for a human where people round off certain trivial signifiers to classification heuristics in order to be able to make calls as quickly and intuitively as possible, and then when those fail there’s a brief period of disorientation as people have to reorient their judgment. So like with language processing, being able to respond to what people say and use certain commonplace conversational gambits was the “thing that distinguishes robots from people” at one time even though it’s super trivial now; with plant-based meat, the inclusion of heme probably screwed up a bunch of people’s mental classifications. But it’s not robust, it’s like a “shock” effect and so it leads to people being fooled by things that seem very obviously fake in the near term, which in turn feeds the headline hype complex for new tech.

at my last lab bbq the boss brought beyond meat and nobody noticed. That doesnt even have heme.

ppl just have low expectations of their boss grilling i think, plant meat at a steakhouse is gonna be held to a different standard

plant based meat indistinguishable from mcdonalds chicken nuggets i think could be very soon, cause how much of their flavor is from the meat itself anyway

I suspect focusing on the merits of the *implied* question of whether an indistinguishable plant-based meat substitute will exist by a deadline obscures the precise nature of the *actual* question being asked. In particular:

This question resolves ambiguously if no test that satisfies the above description is conducted by 2023-04-01.

If I’m understanding this correctly, a no vote is predicting not merely the absence of certain criteria, but the existence of a double-blind randomized study demonstrating distinguishability. I think, considering the practicalities of how studies get done, that P(study occurs | perfect meat substitute exists) > P(study occurs | perfect meat substitute does not exist), so the correct guess on the question is quite different from what it would be if it were just asking for P(perfect meat substitute exists).
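
(A toy sketch of why that asymmetry matters, with entirely made-up numbers - the probabilities below are assumptions for illustration, not estimates of the actual question:)

```python
# Made-up numbers: how conditioning on a qualifying test existing shifts the forecast.
p_substitute = 0.10     # assumed P(indistinguishable substitute exists by the deadline)
p_test_if_sub = 0.50    # assumed P(qualifying double-blind test is run | substitute exists)
p_test_if_none = 0.05   # assumed P(qualifying test is run | no substitute exists)

# Simplification: a substitute that exists always passes; otherwise tasters distinguish it.
p_yes = p_substitute * p_test_if_sub        # a test happens and the substitute passes
p_no = (1 - p_substitute) * p_test_if_none  # a test happens and tasters tell the difference
p_ambiguous = 1 - p_yes - p_no              # no qualifying test by the deadline

print(f"P(yes) = {p_yes:.3f}, P(no) = {p_no:.3f}, P(ambiguous) = {p_ambiguous:.3f}")
# Conditional on the question actually resolving yes-or-no:
print(f"P(yes | resolved) = {p_yes / (p_yes + p_no):.2f}")  # ~0.53, despite p_substitute = 0.10
```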

You’re in front of two identical boxes, exactly one of which you get to take home. One contains 1 million dollars; the other contains 1 million in Monopoly money. How much would you pay to get to peek into one box?
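
(For what it’s worth, the risk-neutral arithmetic is short; a minimal sketch, assuming a peek reveals that box’s contents with certainty:)

```python
# Value of information for the two-box puzzle, risk-neutral version.
prize = 1_000_000

ev_no_peek = 0.5 * prize  # pick blind: half the time you take home Monopoly money
ev_peek = prize           # peek at one box: either way, you now know where the money is

print(ev_peek - ev_no_peek)  # 500000: the most a risk-neutral agent should pay to peek
```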

Granny stared at him. She hadn’t faced anything like this before. The man was clearly mad, but at the heart of this madness was a dreadful cold sanity, a core of pure interstellar ice in the center of the furnace. She’d thought him weak under a thin shell of strength, but it went a lot further than that. Somewhere deep inside his mind, somewhere beyond the event horizon of rationality, the sheer pressure of insanity had hammered his madness into something harder than diamond.

— Terry Pratchett, Wyrd Sisters

This clown panic has to be a classic case of confirmation bias, right? Right?

kaiserin-erzsebet:

Though I will not say anything to spoil later parts of the book, keep an eye on the theme of rationality and irrationality with Seward and Renfield. Dracula as a novel plays with inverting dualities a lot.

Is Seward irrational in his approach? Is throwing himself into his work to cope with romantic rejection as scientific and rational as he thinks it is?

On the other hand, is Renfield actually as irrational as Seward thinks? Or do his behaviors have a strong internal logic?

There are problematic elements to the storyline (which should be a given with a Victorian asylum). But the novel also actively questions whether Seward’s assumed superiority is productive or whether he makes matters worse by failing to grasp that his patient is rational, or at least logically consistent in his beliefs.

Let’s talk about selective outrage.

The same people complaining about the Christmas song “Baby, It’s Cold Outside” have zero to say about God impregnating a virgin through non-consensual, adulterous rape.

It’s a fake controversy that was cooked up by PC police, SJWs & outrage hobbyists. They should save their outrage for reality.

Look around… there’s plenty to actually be pissed off about happening RIGHT NOW in reality. Snap out of it and cultivate an emotional immune system.

#babettebombshell #criticalthinking #rationalthought #rationality #babyitscoldoutside #politicalcorrectness #PCpolice #lifecoaching
https://www.instagram.com/p/Brlc0XTl59g/?utm_source=ig_tumblr_share&igshid=rj389a8y09cq

slatestarscratchpad:

jbeshir:

Chesterton’s fence says “norms are like fences; they were constructed out of specific intent, for specific, legible reasons, and understanding those reasons should be a prerequisite to changing norms, which is a totally reasonable thing to do, once you bring me a reasonably complete explanation of those reasons”.

This is just plain an awful model of reality; they’re often not constructed by any particular person or people’s specific intent, the reasons for their persistence may have to do with their effectiveness but they’re often complex, situational, and highly illegible and resistant to being made legible.

Accordingly, legibly describing the reasons held by the person who intentionally put them in place is often impossible (as no such person ever existed) and legibly describing the reasons for their persistence in a way everyone agrees is more or less complete is extremely difficult and an impractical bar to ever changing anything.

Let me push back against this a little:

Joseph Henrich’s book on culture gives a lot of examples of cultural evolution going right. He starts with the example of cassava cultivation. Unless you prepare cassava in a very specific labor-intensive way that nobody could ever think up on their own, it contains a tiny bit of cyanide. If you eat improperly-prepared cassava for months and months, you’ll get chronic cyanide poisoning and eventually become weak and sickly. This isn’t necessarily easy to observe. You won’t start getting sick until months after you change your cassava-preparation style. Some people will get sick anyway regardless of how they prepare their cassava. Other people will get lucky, and be unusually resistant, and eat improperly-prepared cassava for years and do just fine. This is exactly the kind of thing people are terrible at noticing - remember, we still don’t have universal social agreement on whether antidepressants cure depression, because they take months to work and people may get more or less depressed for other reasons. This is the same kind of problem you face with preparing your cassava wrong. Nevertheless, the Native Americans had a tradition of preparing cassava correctly, and if anyone prepared it wrong then people would get angry at them for violating tradition.

We don’t expect primitive tribes to be able to invent cyanide biochemistry, so I agree with your statement that

“[Traditions] are often not constructed by any particular person or people’s specific intent, the reasons for their persistence may have to do with their effectiveness but they’re often complex, situational, and highly illegible and resistant to being made legible.”

But to me this seems like an argument in favor: the fact that cassava preparation is this complex and brilliant thing beyond the ability of any individual to invent or understand means we should be especially careful about discarding it.

I think our main difference is that it sounds like you believe Chesterton’s Fence is meant as an absolute statement: “You may never remove a tradition until you can make it legible”. I agree this would result in never removing most traditions.

But a world without Chesterton’s Fence seems equally dangerous - “You may never keep a tradition, or resist someone’s attempt to change it, unless you can make it legible”. Again, no one will ever be able to meet the burden of legibility for most traditions, so this means you can’t keep traditions like cassava preparation until you don’t need them anymore.

In other words, neither side will ever be able to meet the burden of legibility. “Complete enforcement of Chesterton’s Fence” means no tradition will ever change. But “complete abandonment of Chesterton’s Fence” means no tradition can ever endure.

You seem to agree with something like this when you say:

“At its mediocre, it acts as a push to not change things unless there’s someone who cares strongly enough, which is a reasonable heuristic in the actual world where producing full, generally accepted explanations for why norms are as they are is impractical. Rather than doing this through the “wait until someone cares enough to produce the demanded explanation” mechanism it proposes, it actually does this through a “wait until people are willing to just say ‘fuck you’ to people bringing up Chesterton Fence” mechanism, which is not particularly great discourse. It would be better to just argue “we should err conservative unless change is sufficiently good” directly.”

My impression is that the short two-word version of the “We should err conservative unless change is sufficiently good” argument is “Chesterton’s Fence”. You can make any heuristic sound stupid by treating it as an absolute. “People have a right to free speech” sounds stupid if you interpret it to mean “People can yell fire in a crowded theater”. “Treat men and women equally” sounds stupid if you interpret it to mean that there should be an equal number of maternity hospitals for both genders. Every principle gets interpreted as a push in one direction: “Remember this principle in general, but back off when it’s obviously stupid.”

G. K. Chesterton had his own utopian plan for how to reform Britain and was perfectly happy to push it despite all of the traditions he knew it would overthrow, just like everyone else; interpreting him as someone who believed nothing should ever change is grossly unfair and ahistorical. He is actually very insistent on this point:

I claim a right to propose as a solution the old patriarchal system of a Highland clan, if that should seem to eliminate the largest number of evils. I claim the right to propose the complete independence of the small Greek or Italian towns, a sovereign city of Brixton or Brompton, if that seems the best way out of our troubles.

I assume Chesterton did not have a complete theory of why Brixton was not independent.

The people who want to abandon traditions are always going to have a big advantage in a rational society, because the reason for abandoning traditions will always be legible. If you want to stop preparing cassava the traditional way, you can always say “It takes hours and hours to prepare this cassava, and it tastes the same afterwards, this is a stupid waste of time”. This will be intuitively compelling and everyone will agree with you.

When one side has legible arguments and the other side doesn’t, the first side will always win absent some other force. The only chance we have to resist this is if the heuristic in favor of keeping traditions is well-known and has a short catchy phrase that you can use to promote it with compelling visual imagery about being gored by a bull. This is Chesterton’s Fence. Every time someone tries to change a tradition, someone else should say “But Chesterton’s Fence!” and then they should get to have an argument about it.

Then the person trying to change it will probably win, because legibility is a pretty big advantage. My ideal world is one where it is relatively easy for a few people to change tradition for themselves, but also easy for a majority to resist it for long enough to see how the first group of people did. If the first group of people seem to be doing well after a long time, that will eventually convince the holdouts (unless the holdouts are much different in some way that makes the tradition more important for them). I realize this is kind of simplistic, but I think it’s probably what works in real life.

The literal text of Chesterton’s Fence, in the chapter “The Drift From Domesticity” in The Thing, was:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

And his first ever use of it later in the chapter was:

But, anyhow, this is the right end at which to begin any such enquiry; and not at the tail-end or the fag-end of some private muddle, by which Dick has become discontented or Susan has gone off on her own. If Dick or Susan wish to destroy the family because they do not see the use of it, I say as I said in the beginning; if they do not see the use of it, they had much better preserve it. They have no business even to think of destroying it until they have seen the use of it.

My complaint of the anti-domestic drift is that it is unintelligent. People do not know what they are doing; because they do not know what they are undoing.  There are a multitude of modern manifestations, from the largest to the smallest, ranging from a divorce to a picnic party. But each is a separate escape or evasion; and especially an evasion of the point at issue.  People ought to decide in a philosophical fashion whether they desire the traditional social order or not; or if there is any particular alternative to be desired.

(It took the time between its proposal and its first use for it to shift from “proposing removal of institution or law without an understanding of its use is wrong” to “acting to leave a bad family situation/have a divorce without a formal, philosophical argument for the use of the Family and plan for its replacement is wrong”, apparently)

I don’t object to a heuristic like the one discussed above, nor to it having a catchy name, but I don’t think Chesterton’s Fence is it. Chesterton’s writing pretty clearly does not say “the benefits cannot be made legible, and therefore you must have extra countervailing benefit”, it says “so go make the benefits legible, and then I’ll take a look at your benefit”. It’s definitely not saying “have extra benefit to counter illegibility”, or even “never do anything”, it’s saying “fix the illegibility”.

That’s distinctly different to the heuristic proposed above. And I think the general use is closer to Chesterton’s than to the require-extra-benefit-heuristic described above; consider your explanation in 2013:

Chesterton’s point is that “I can’t think of any reason to have a fence out here” is the worst reason to remove a fence. Someone had a reason to put a fence up here, and if you can’t even imagine what it was, it probably means there’s something you’re missing about the situation and that you’re meddling in things you don’t understand. None of this precludes the traveller who knows that this was historically a cattle farming area but is now abandoned – ie the traveller who understands what’s going on – from taking down the fence.

As with fences, so with arguments. If you have no clue how someone could believe something, and so you decide it’s stupid, you are much like Chesterton’s traveler dismissing the fence (and philosophers, like travelers, are at high risk of stumbling across bull.)

This is more pointing in the “fix the illegibility” direction than the “so have a stronger reason to believe you are right than would otherwise be needed” direction. It sounds reasonable, and for the case of philosophical arguments, it is mostly. But when it comes to norms, we cannot, in fact, answer the question of who had the reason to establish the family and why, so this turns into an impossible demand.

I would be much happier with a heuristic that was clearly “never change anything” rather than an impossible demand, because then it can be clearly stupid, in the manner your other example principles sometimes are, rather than sounding like universal due diligence, which is how Chesterton’s Fence was treated there in its application to philosophical arguments, and how Chesterton himself treated it in that chapter (I don’t take his failure to use it elsewhere as a sign that the wording was confusing; I just take it as simple inconsistency. It would be mildly remarkable if he wasn’t.)

That said, I think I’d stand by the idea that all those principles are bad arguments on their own. I wouldn’t think super much of someone whose contribution to a discussion was “but that violates Free Speech”, much as I don’t think much of a contribution which is just “but that violates Chesterton’s Fence”. “We should require more good from it, before considering it a good plan, because it violates Free Speech” would be a decent contribution, as would “We should require more good of it, before considering it a good plan, because it involves change to complex unpredictable systems which may do good in ways we don’t understand”. Rhetorical superweapons which you’re not allowed socially to say “yes, and I think countervailing benefits are worth it” to are not good arguments, in the general sense.

If the argument that CF is just a name for a prior against change rather than a demand for due diligence doesn’t stand, then the second argument that it’s a useful rhetorical stick for conservatism can’t alone address whether it’s a good argument very much at all.

(The cassava thing came up elsewhere in the thread! My general reaction is “seems to be more or less the thalidomide of process change” - so not contentless, but also people are in dire need of looking at both sides of the balance sheet here. Agricultural process change has had its upsides, and I think deciding that shortening an industrial process by a few hours or days should no longer be viewed as enough to justify a break from the status quo would be an overreaction to that data point. Ultimately, an advanced industrial civilisation is worth the risk of getting chronic cyanide sickness sometimes, even if it takes quite a long time for the civilisation to work out that “cyanide” is a thing and fix it. Modern medicine was worth thalidomide. The Industrial Revolution was worth the lead poisoning.)

rumplefuckingstiltzkin:

sang-the-sun-in-flight:

a-point-in-tumblspace:

sang-the-sun-in-flight:

jbeshir:

inferentialdistance:

plain-dealing-villain:

jbeshir:

jbeshir:

plain-dealing-villain:

jbeshir:

Chesterton’s fence says “norms are like fences; they were constructed out of specific intent, for specific, legible reasons, and understanding those reasons should be a prerequisite to changing norms, which is a totally reasonable thing to do, once you bring me a reasonably complete explanation of those reasons”.

This is just plain an awful model of reality; they’re often not constructed by any particular person or people’s specific intent, the reasons for their persistence may have to do with their effectiveness but they’re often complex, situational, and highly illegible and resistant to being made legible.

Accordingly, legibly describing the reasons held by the person who intentionally put them in place is often impossible (as no such person ever existed) and legibly describing the reasons for their persistence in a way everyone agrees is more or less complete is extremely difficult and an impractical bar to ever changing anything.

I think you’re misunderstanding Chesterton’s Fence quite badly. It does not claim that norms were constructed intentionally for explicit reasons. (And, as you note, that claim is false most of the time.) Despite being constructed implicitly and opaquely, norms aren’t constructed in a vacuum; there are reasons for them, even though few if any people could articulate those reasons.

To reasonably expect a better outcome, though, you must understand why the norm exists and what purpose it is serving. If you don’t understand or misunderstand what it is doing now, changing it blindly will only help things by chance.

You mention women’s suffrage as a reason it’s a bad heuristic; I counter that women’s suffrage in the US is an example of it being a good heuristic. The US women’s suffrage movement was heavily backed by the KKK, on the theory that white wives would vote what their husbands told them to and dirty immigrants would not. If they had followed Chesterton’s Fence and considered what purpose the restriction of the franchise was serving, they could have avoided putting their considerable political might behind the change that had the single biggest impact in killing their viability as a political organization.

Like most good heuristics, it’s value-neutral. Following it makes you better at achieving your goals, whatever they are.

For it to be a good heuristic it must oppose action when it is incorrect and support action when it is correct. It must discriminate between cases where action is good and bad.

My criticism is that it opposes action all the time, because the thing it demands be understood prior to change has never in the history of humanity been understood prior to a change being made. Citing examples where its opposition was correct in no way addresses the criticism, which is that it’s a heuristic that boils down to “never change anything” by setting an impossible prerequisite to change, based on an erroneous and moderately intellectually dishonest model that the prerequisite is remotely practical, masquerading as a request for reasonable due diligence.

The claim that it was correct in opposing women’s suffrage for some specific selfish subgroup at the time does not change that it got the situation wrong: for the vast bulk of people - including all those actually living in the same country as GK Chesterton - it incorrectly opposed it, by being a complete stopped clock.

In practice people cite it when they intuit that more caution is good, and just forget it exists when they don’t intuit that, and they should just state their intuition rather than inconsistently appealing to an unsatisfiable standard.

Sometimes the intuition is right, but the argument never is; having the right conclusion occasionally doesn’t make the argument correct.

A new analogy worth promoting to a new top level post rather than just editing in: An equivalently formulated heuristic to Chesterton’s Fence where the error might be more intuitively obvious would be “never put any new medicine in your body until you understand the use of anything it changes, find out why those systems were created and understand their purpose”.

It’s equivalent to “never put any new medicine in your body ever”, as a heuristic goes, because the standard it describes has never and cannot viably be met. Humans are too complex.

And yet it sounds like a reasonable “heuristic” if you only look for cases where its opposition was right, and ignore all the ones where it was wrong, by assuming that on the latter it was only temporarily wrong and by getting more information it would have become right, resting on an implicit, incorrect supposition that the understanding it demands can be acquired cheaply and so its opposition would have been surmountable.

This is not to say that in any given context, “gather more data on what’s going on before trying the new medicine” is a wrong conclusion. But the standard being referenced to support it is impossible, and it’s only by failing to actually follow it and only citing it when it’s supported by an intuition that anyone could ever support use of any medicine ever.

“The advocates of this change haven’t produced an explanation for the creation and purpose of the previous status quo that is generally satisfying” is thus a bad argument for rejecting their change pending more information, even when that is actually a correct conclusion.

You’re still taking an excessively rigid view of what it means and an excessively high standard of what understanding means. It’s a very attainable standard and I’ve attained it multiple times in the last year when making important decisions.

For it to be a good heuristic it must oppose action when it is incorrect and support action when it is correct.

No, it just needs to result in more good done than bad relative to not adopting it.

…yes, and the way an heuristic, an algorithm for making decisions, accomplishes more good done than bad relative to not adopting it, is by distinguishing between good and bad decisions, in an accurate-enough-to-be-better-than-the-alternative manner.

I suppose I didn’t explicitly spell out the connection between predictive accuracy and consequential value for heuristics, but yes, the predictive accuracy of an heuristic affects its consequential value, although social effects from other people reacting to your adoption of it exist so I suppose they aren’t strictly equivalent.

At any rate, the point of that post was that you can’t demonstrate it makes more good done than otherwise by just pointing at true negatives; you also need to consider false negatives. (And in general you’d also need to consider both true and false positives, but “reject everything” has zero of both.)

Naturally false positives and false negatives may be bad to differing degrees; there’s choices to make when you proceed to actually try to quantify predictive accuracy of a classifier for ranking purposes for that reason.

And it’s possible you might find “reject everything” outperforms everything else available if false positives are bad enough. But if you do there are simpler ways to describe and define it than this.
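
(To make the one-sided-balance-sheet point concrete, here is a minimal sketch of scoring a heuristic over all four outcome types; every number in it is invented for illustration:)

```python
# Expected value of a change-approving heuristic, relative to a "change
# nothing" baseline. All probabilities and payoffs below are invented.
def net_value(p_good, p_approve_given_good, p_approve_given_bad, gain, cost):
    true_positives = p_good * p_approve_given_good        # good changes let through
    false_positives = (1 - p_good) * p_approve_given_bad  # bad changes let through
    # False negatives (good changes blocked) show up as gains never collected,
    # which is why tallying true negatives alone proves nothing.
    return true_positives * gain - false_positives * cost

# "Reject everything": zero false positives, but also zero value over baseline.
print(net_value(0.6, 0.0, 0.0, gain=1.0, cost=5.0))  # 0.0
# A discriminating heuristic can beat it even when false positives are costly.
print(net_value(0.6, 0.8, 0.1, gain=1.0, cost=5.0))  # 0.28
```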

I’m pretty sure Chesterton’s Fence, when used correctly, is just a prior that the structure of systems with humans in them is already subject to optimization and you need to account for the system and what its past members were trying to accomplish lest you be taken by surprise by some unknown consequences of the rule you’re changing.

The following two patterns seem very different to me:

  • Some agent designed a system that seems sub-optimal. They almost certainly built it to solve a problem; if you change it, you may re-introduce the problem.
  • Some non-agentic process resulted in a system that looks sub-optimal, and it’s been around for long enough that people have built atop it. If you change it, who knows what’ll come tumbling down?

A fence is a great metaphor for the first pattern. The second pattern, which I had never been consciously aware of until today (thanks!), seems valuable to have a handle for too – but a fence seems like a misleading metaphor for it. Chesterton’s… um… Non-Optimized Foundational System.

I am good at naming things.

The second thing is closer to Spaghetti Towers

The existence of Spaghetti Towers is a conclusive argument for the use of the Chesterton’s Fence heuristic. If you first identify a problem you believe you can fix, it might look something like this:

1) Identify whether or not the problem is in a Spaghetti Tower system. If not, proceed to last step. If so, proceed to step 2.

2) Identify everything which requires stasis in the thing you mean to change in order to function. If you cannot do this, then this is a good place to stop. The Chesterton’s Fence heuristic has done its job. Change nothing. If you must, tinker at your own risk, and upon your own head be the consequences. If you can identify these things, proceed to step 3.

3) Identify the problems that would be caused in any remaining secondary systems by your having tinkered with the primary. If you cannot identify these, stop here. If you can, proceed to 4.

4a) For each secondary problem, decide whether or not you can fix it. If and only if you can fix all secondary problems, then for each secondary problem, return to step 2. 4b) Repeat until you encounter a problem you cannot solve, or until you have a solution for each problem you have encountered. 4c) Only once you have a solution for each problem encountered, proceed to the last step.

Last step) Update/improve/fix the problem(s). Chesterton’s Fence has done its job.

In this model, the Chesterton’s Fence heuristic discourages fucking about with that which you do not understand or cannot solve (and therefore have a slim chance of improving).
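
(If it helps, the procedure above can be read as a recursive dependency check. A minimal sketch, with a hypothetical dependency graph and a hypothetical `can_fix` oracle; the step numbers in the comments refer to the list above:)

```python
# Toy rendering of the steps above: walk the graph of things that depend on
# the part you want to change, and only proceed if every downstream breakage
# has a known fix. The graph and fix-oracle are made up for illustration.
def safe_to_change(part, dependents, can_fix, _path=None):
    path = set() if _path is None else _path
    if part in path:   # dependency loop: substep 4c is never reached, change nothing
        return False
    path.add(part)
    ok = all(
        can_fix(downstream)                                   # steps 3/4a: fixable?
        and safe_to_change(downstream, dependents, can_fix, path)  # 4b: recurse
        for downstream in dependents.get(part, [])            # step 2: what needs stasis?
    )
    path.discard(part)  # done exploring this branch
    return ok           # True means 4c was reached: every breakage has a fix

# e.g. changing manioc prep disturbs meal routines, which break nothing further:
dependents = {"manioc_prep": ["meal_routine"], "meal_routine": []}
print(safe_to_change("manioc_prep", dependents, can_fix=lambda p: True))   # True
print(safe_to_change("manioc_prep", dependents, can_fix=lambda p: False))  # False: leave it be
```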

It discourages ruination, but does not encourage anything other than not making anything worse. We have not yet provided a metric for defining what is correct and what is incorrect action, so I posit that this is good enough.

Deliberately not fucking around with volatile shit is correct action, and deliberately fucking around with volatile shit is incorrect action, when you are aware that your chances of improving things are astronomically slim.

I noted while I typed this out that when thought through in this manner, there is the possibility that one of your ‘primary’ functions is itself dependent on a secondary function that is dependent on the primary. In a loop like this, you never get to substep 4c), and therefore never change anything. Consequently, Chesterton’s Fence has again done its job, ensuring that you not mess with anything in this system, because it is too volatile to mess with.

However, it also occurs to me that in going through this process, one familiarizes oneself with the entire spaghetti tower, no matter how complicated. I believe it not beyond possibility that the tinkerer could create from scratch an entirely new system to replace the old one, this time simpler, more efficient, and without any of those pesky loops. Should this happen, then again the Chesterton’s Fence heuristic has done its job, because now the tinkerer is equipped with both a new system and an argument for why the new system should replace the old.

@jbeshir TLDR Chesterton’s Fence is a good heuristic because it reminds you not to waste your time fixing overly complicated systems (discourages incorrect action: time wasting), but to simply replace them. (Encourages correct action: make improvements.)

A single correct negative prediction cannot vindicate a heuristic without examining the impacts of its false negatives. Same as how thalidomide doesn’t prove the FDA is right to reject all drugs we don’t understand all the causal effects of (i.e. all of them), or even right to pursue its current much weaker policy. You can never get a conclusive argument by only looking at one side of a balance sheet, unless it’s so extreme you can infer the other side can’t possibly match it, which this isn’t. You need to look at the costs as well as the benefits of a policy.

A heuristic that outputs categorical opposition to such things as “abolition of slavery”, “women’s suffrage”, “LGBT rights”, and “abolition of enforced monogamy”, until such time as someone has ignored you and done it anyway or we have solved sociology, does, in fact, have its costs. It would have put you on the wrong side of many conflicts in the past, and so probably also today; it is an argument from an early 20th century social conservative, and this is not an unintentional implication. And if you are not on the wrong side, then you’re just deploying it inconsistently against your enemies.

Per morlock-holmes’ reply, one of the problems with CF is that fences are largely harmless, whereas norms and institutions very much are not. And when you fix this - change it to Chesterton’s Man Who Beats Up Gay People or whatever - the lack of consideration of those costs comes into plain relief and it stops sounding good.

Your workflow for rebuilding an entire social system in one go is infeasible, and it would have been very wrong to, say, delay the introduction of women’s suffrage until we could rebuild the spaghetti tower that is the family entirely - something we would not have been able to do in a century. And it would be similarly wrong to delay any further improvements (better handling of poly?) until we can fully model and rebuild the entire spaghetti tower in one go. Even assuming that this wouldn’t be insane hubris. Count the costs of the process you’re advocating. And I want to highlight that this workflow has never actually been executed successfully; this is an example of demanding a kind of understanding that has never successfully been achieved in the past on the basis of CF.

The claim people can do no better than chance at telling whether an incremental change is good or bad without a detailed philosophical grounding in the institutions being changed and their historical role is… a thing I’m going to invite people to reconsider if they really want to assert. (No better than chance is a very strong claim with a lot of counterintuitive implications and things that must be totally uncorrelated to be right)


In the example of agriculture brought up here, if we didn’t screw with what wasn’t entirely legible, we’d never have been able to have the Green Revolution. I’m sure there were campaigners against widespread use of fertiliser at the time out of fear of unknown consequences; a heuristic which, if followed, would make you one of them has costs. Consequences from runoff existed and happened; it was worth it. There’s certainly controversy over its social effects, which would suggest that we didn’t have a sufficiently deep model of how new practices would interact with social institutions at the time (secondary effects under your part 4) to avoid problems; does that mean we should have never instituted the new practices until we did, even though a half century later we still can’t do good sociological forecasting?

Today’s GMO debate is more arguable, but there’s certainly enough uncertainty in enough avenues (plant biology is not understood perfectly either) for someone to make a CF argument that would be very difficult to convincingly rebut if you accepted CF as A Rule We Should Follow. We don’t understand all the interactions of the biosphere, and largely rely on our ability to bulldoze things like pest populations after it turns out a change in our behaviour causes a change in them, although we try to model things.


The author’s original use - to argue that people shouldn’t behave in a way that piecemeal dismantles a system by leaving a situation that is bad, unless they have a philosophically reasoned argument for the dismantling of the system in the abstract - also has its costs.


Now, you’ve added that you can override it, although it be on your own head. I want to point out that this isn’t in the original:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

It also isn’t consistently in your own stuff or in the way people use it; people don’t say “you should assign a cost to changing things to account for unknown goods they might do, and so only do so if there are clear benefits”. This would be reasonable, but it would not be a rhetorical superweapon. They say “don’t change a thing unless you understand it”, and resoundingly mock the idea anyone might understand anything. This is unreasonable, and a bad argument; the former could be a good one.

Everything is always risky, and it is always on your own head. There are risks to not addressing suffering in the world, too. The world is an uncertain place. A good argument for where to be inactive could be based on costs, it could define an information threshold that was in some cases reached, but even there it would capture too small a fraction of the balance sheet to use as a judgement overriding heuristic in the way people (inc. GKC) treat Chesterton’s Fence. It still wouldn’t be right often enough, and being honest about that would take away its rhetorical power. And that would be correct.

drethelin:

ponteh2dhh1ksdiwesph2tres:

a-point-in-tumblspace:

sang-the-sun-in-flight:

jbeshir:

inferentialdistance:

plain-dealing-villain:

jbeshir:

jbeshir:

plain-dealing-villain:

jbeshir:

You’ve completely missed the point.

In the Americas, where manioc was first domesticated, societies who have relied on bitter varieties for thousands of years show no evidence of chronic cyanide poisoning. In the Colombian Amazon, for example, indigenous Tukanoans use a multistep, multiday processing technique that involves scraping, grating, and finally washing the roots in order to separate the fiber, starch, and liquid. Once separated, the liquid is boiled into a beverage, but the fiber and starch must then sit for two more days, when they can then be baked and eaten. Figure 7.1 shows the percentage of cyanogenic content in the liquid, fiber, and starch remaining through each major step in this processing.

Such processing techniques are crucial for living in many parts of Amazonia, where other crops are difficult to cultivate and often unproductive. However, despite their utility, one person would have a difficult time figuring out the detoxification technique. Consider the situation from the point of view of the children and adolescents who are learning the techniques. They would have rarely, if ever, seen anyone get cyanide poisoning, because the techniques work. And even if the processing was ineffective, such that cases of goiter (swollen necks) or neurological problems were common, it would still be hard to recognize the link between these chronic health issues and eating manioc. Most people would have eaten manioc for years with no apparent effects. Low cyanogenic varieties are typically boiled, but boiling alone is insufficient to prevent the chronic conditions for bitter varieties. Boiling does, however, remove or reduce the bitter taste and prevent the acute symptoms (e.g., diarrhea, stomach troubles, and vomiting).

So, if one did the common-sense thing and just boiled the high-cyanogenic manioc, everything would seem fine. Since the multistep task of processing manioc is long, arduous, and boring, sticking with it is certainly non-intuitive. Tukanoan women spend about a quarter of their day detoxifying manioc, so this is a costly technique in the short term. Now consider what might result if a self-reliant Tukanoan mother decided to drop any seemingly unnecessary steps from the processing of her bitter manioc. She might critically examine the procedure handed down to her from earlier generations and conclude that the goal of the procedure is to remove the bitter taste. She might then experiment with alternative procedures by dropping some of the more labor-intensive or time-consuming steps. She’d find that with a shorter and much less labor-intensive process, she could remove the bitter taste. Adopting this easier protocol, she would have more time for other activities, like caring for her children. Of course, years or decades later her family would begin to develop the symptoms of chronic cyanide poisoning.

Thus, the unwillingness of this mother to take on faith the practices handed down to her from earlier generations would result in sickness and early death for members of her family. Individual learning does not pay here, and intuitions are misleading. The problem is that the steps in this procedure are causally opaque—an individual cannot readily infer their functions, interrelationships, or importance. The causal opacity of many cultural adaptations had a big impact on our psychology.

Wait. Maybe I’m wrong about manioc processing. Perhaps it’s actually rather easy to individually figure out the detoxification steps for manioc? Fortunately, history has provided a test case. At the beginning of the seventeenth century, the Portuguese transported manioc from South America to West Africa for the first time. They did not, however, transport the age-old indigenous processing protocols or the underlying commitment to using those techniques. Because it is easy to plant and provides high yields in infertile or drought-prone areas, manioc spread rapidly across Africa and became a staple food for many populations. The processing techniques, however, were not readily or consistently regenerated. Even after hundreds of years, chronic cyanide poisoning remains a serious health problem in Africa. Detailed studies of local preparation techniques show that high levels of cyanide often remain and that many individuals carry low levels of cyanide in their blood or urine, which haven’t yet manifested in symptoms. In some places, there’s no processing at all, or sometimes the processing actually increases the cyanogenic content. On the positive side, some African groups have in fact culturally evolved effective processing techniques, but these techniques are spreading only slowly.

The point here is that cultural evolution is often much smarter than we are. Operating over generations as individuals unconsciously attend to and learn from more successful, prestigious, and healthier members of their communities, this evolutionary process generates cultural adaptations. Though these complex repertoires appear well designed to meet local challenges, they are not primarily the products of individuals applying causal models, rational thinking, or cost-benefit analyses. Often, most or all of the people skilled in deploying such adaptive practices do not understand how or why they work, or even that they “do” anything at all. Such complex adaptations can emerge precisely because natural selection has favored individuals who often place their faith in cultural inheritance—in the accumulated wisdom implicit in the practices and beliefs derived from their forbearers—over their own intuitions and personal experiences.

The medicine question is actually a good one as long as you allow information to be transmissible. Just as with the fence example, if you ask someone and they can say “yeah I built the fence 20 years ago to keep my cows in but now I stopped herding cows so do what you want”, you don’t actually have to figure everything about the fence out yourself.


You don’t need to know all the metabolic pathways a drug affects before you decide to take it, if you know that a trustworthy person has actually researched it.


Or to hop analogies again, no one person could build a modern car but a modern person can understand one well enough to take it apart and put it back together assisted by fairly comprehensive knowledge of what every part is for and what it does. If you don’t know what all the parts do, don’t take the car apart thinking you can put it back better.

There is no trustworthy person who has researched these things in medicine. There is no team who can build a human body; the requested knowledge is not specialist, it is beyond the grasp of current science.

We still don’t understand the pathways SSRIs affect enough to know how they even have the effects we want, let alone what else they might do, and I’m pretty sure we don’t understand brain pathways enough to know what all the dopaminergic receptors affected by antipsychotic drugs do. We substitute experimentation with short-term observation - on the scale of single-digit years at most - for being able to formally reason about how things work.

Because we are ridiculously far from being able to do that, and while in principle a formal understanding would be nice to have, there are pros to having medicine, ones that outweigh the cons and that we don’t want to forgo, so we use looser rules. Which I think are well argued to still be too strict.

And as it is in medicine, it is in sociology.

inferentialdistance:

jbeshir:

inferentialdistance:

plain-dealing-villain:

jbeshir:

jbeshir:

plain-dealing-villain:

jbeshir:

Chesterton’s fence says “norms are like fences; they were constructed out of specific intent, for specific, legible reasons, and understanding those reasons should be a prerequisite to changing norms, which is a totally reasonable thing to do, once you bring me a reasonably complete explanation of those reasons”.

This is just plain an awful model of reality; they’re often not constructed by any particular person or people’s specific intent, the reasons for their persistence may have to do with their effectiveness but they’re often complex, situational, and highly illegible and resistant to being made legible.

Accordingly, legibly describing the reasons held by the person who intentionally put them in place is often impossible (as no such person ever existed) and legibly describing the reasons for their persistence in a way everyone agrees is more or less complete is extremely difficult and an impractical bar to ever changing anything.

Keep reading

I think you’re misunderstanding Chesteron’s Fence quite badly. It does not claim that norms were constructed intentionally for explicit reasons. (And, as you note, that claim is false most of the time.) Despite being constructed implicitly and opaquely, norms aren’t constructed in a vacuum; there are reasons for them, even though few if any people could articulate those reasons.

To reasonably expect a better outcome, though, you must understand why the norm exists and what purpose it is serving. If you don’t understand or misunderstand what it is doing now, changing it blindly will only help things by chance.

You mention women’s suffrage as a reason it’s a bad heuristic; I counter that women’s suffrage in the US is an example of it being a good heuristic. The US women’s suffrage movement was heavily backed by the KKK, on the theory that white wives would vote what their husbands told them to and dirty immigrants would not. If they had followed Chesterton’s Fence and considered what purpose the restriction of the franchise was serving, they could have avoided putting their considerable political might behind the change that had the single biggest impact in killing their viability as a political organization.

Like most good heuristics, it’s value-neutral. Following it makes you better at achieving your goals, whatever they are.

For it to be a good heuristic it must oppose action when it is incorrect and support action when it is correct. It must discriminate between cases where action is good and bad.

My criticism is that it opposes action all the time, because the thing it demands be understood prior to change has never in the history of humanity been understood prior to a change being made. Citing examples where its opposition was correct in no way addresses the criticism, which is that it’s a heuristic that boils down to “never change anything” by setting an impossible prerequisite to change, based on an erroneous and moderately intellectually dishonest model that the prerequisite is remotely practical, masquerading as a request for reasonable due diligence.

The claim that it was correct in opposing women’s suffrage for some specific selfish subgroup at the time does not change that it got the situation wrong for the vast bulk of people (including all those actually living in the same country as GK Chesterton) by incorrectly opposing it, by being a complete stopped clock.

In practice people cite it when they intuit that more caution is good, and just forget it exists when they don’t intuit that, and they should just state their intuition rather than inconsistently appealing to an unsatisfiable standard.

Sometimes the intuition is right, but the argument never is; having the right conclusion occasionally doesn’t make the argument correct.

A new analogy worth promoting to a new top level post rather than just editing in: An equivalently formulated heuristic to Chesterton’s Fence where the error might be more intuitively obvious would be “never put any new medicine in your body until you understand the use of anything it changes, find out why those systems were created and understand their purpose”.

It’s equivalent to “never put any new medicine in your body ever”, as a heuristic goes, because the standard it describes has never and cannot viably be met. Humans are too complex.

And yet it sounds like a reasonable “heuristic” if you only look for cases where its opposition was right, and ignore all the ones where it was wrong, by assuming that on the latter it was only temporarily wrong and by getting more information it would have become right, resting on an implicit, incorrect supposition that the understanding it demands can be acquired cheaply and so its opposition would have been surmountable.

This is not to say that in any given context, “gather more data on what’s going on before trying the new medicine” is a wrong conclusion. But the standard being referenced to support it is impossible, and it’s only by failing to actually follow it and only citing it when it’s supported by an intuition that anyone could ever support use of any medicine ever.

“The advocates of this change haven’t produced an explanation for the creation and purpose of the previous status quo that is generally satisfying” is thus a bad argument for rejecting their change pending more information, even when that is actually a correct conclusion.

You’re still taking an excessively rigid view of what it means and an excessively high standard of what understanding means. It’s a very attainable standard and I’ve attained it multiple times in the last year when making important decisions.

For it to be a good heuristic it must oppose action when it is incorrect and support action when it is correct.

No, it just needs to result in more good done than bad relative to not adopting it.

…yes, and the way an heuristic, an algorithm for making decisions, accomplishes more good done than bad relative to not adopting it, is by distinguishing between good and bad decisions, in an accurate-enough-to-be-better-than-the-alternative manner.

I suppose I didn’t explicitly spell out the connection between predictive accuracy and consequential value for heuristics, but yes, the predictive accuracy of an heuristic affects its consequential value, although social effects from other people reacting to your adoption of it exist so I suppose they aren’t strictly equivalent.

At any rate, the point of that post was that you can’t demonstrate it makes more good done than otherwise by just pointing at true negatives; you also need to consider false negatives. (And in general you’d also need to consider both true and false positives, but “reject everything” has zero of both.)

Naturally false positives and false negatives may be bad to differing degrees; for that reason, there are choices to make when you proceed to actually try to quantify the predictive accuracy of a classifier for ranking purposes.

And it’s possible you might find “reject everything” outperforms everything else available if false positives are bad enough. But if you do there are simpler ways to describe and define it than this.
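To make that concrete, a minimal sketch with entirely invented numbers: score a change-approval policy as a classifier over proposed changes, counting the forgone gains of blocked good changes (false negatives) alongside the costs of adopted bad ones (false positives).

# Toy scoring of a "should we change this?" policy as a binary classifier.
# Every number here is invented purely for illustration.

def policy_value(p_good, p_approve_good, p_approve_bad, gain, loss):
    """Expected value per proposed change, relative to the status quo.

    p_good         -- base rate: fraction of proposed changes that are good
    p_approve_good -- P(policy approves | change is good)
    p_approve_bad  -- P(policy approves | change is bad)
    gain           -- value gained by adopting a good change
    loss           -- value lost by adopting a bad change
    """
    return (p_good * p_approve_good * gain
            - (1 - p_good) * p_approve_bad * loss)

# "Reject everything" always scores exactly zero: no false positives,
# and every blocked good change is a forgone gain it never has to count.
reject_all = policy_value(0.3, 0.0, 0.0, gain=1, loss=5)

# An 80%/90%-accurate heuristic, where bad changes are common and costly,
# still comes out below blanket rejection.
careful = policy_value(0.1, 0.8, 0.1, gain=1, loss=5)

# The same heuristic, where most proposals are good and mistakes are cheap,
# comfortably beats it.
permissive = policy_value(0.8, 0.8, 0.1, gain=1, loss=1)

print(round(reject_all, 2), round(careful, 2), round(permissive, 2))
# -> 0.0 -0.37 0.62

On these made-up numbers the same heuristic loses to blanket rejection in one regime and clearly beats it in another; its accuracy never changed, only the base rates and the relative costs did.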

You can’t judge the quality of a heuristic without doing the statistical analysis of the problems you encounter (i.e. base rates matter). Even highly accurate heuristics can be harmful if the base rates are sufficiently skewed against them. Even highly inaccurate heuristics can be beneficial if the base rates are sufficiently skewed towards them.

Chesterton’s Fence is the 100% correct argument that the appearance of uselessness is not equivalent to uselessness, that ignorance of the risks doesn’t make the risks go away. That the “use” in question is the intent of an agent, or an evolved adaptation, is irrelevant; all that matters is that an optimization force put it there, and mathematically, random changes from a local optimum overwhelmingly result in less effective outcomes.

The adoption of Chesterton’s Fence is not “do nothing”, it’s “manage your risk”. Yeah, most of the time it’s “don’t do the thing”: I should not eat the overwhelming majority of objects I find on the ground. The medical research industry is a great example of doing things despite the risk: animal testing before humans, limited testing on humans before being available to everyone.

One of the basics is, if you’re going to tear down the fence, be prepared to put it back up. This requires being able to admit that a change in norm can have negative consequences, doing a proper cost/benefit analysis, and actually undoing the norm change if the analysis indicates that that’s the best course of action. This concept is woefully uncommon among those demanding change.

One of the others is that, if the thing you’re trying to change is the result of evolution, your change has to be at least as adaptive as the original. Otherwise the same impersonal forces that generated the original state will undo your change. And if that happens? Accept it and move on; doubling down on forcing the change is only going to waste resources and/or hurt people caught in the middle of these things.

Yes, if you make up whatever argument you want and call it “Chesterton’s Fence”, without any relation to the original author’s argument, its common usage, the rhetorical effect of its reference, or the practical standards people are held to in rebutting such a reference, it can be a good argument.

jbeshir:

plain-dealing-villain:

You’re still taking an excessively rigid view of what it means and an excessively high standard of what understanding means. It’s a very attainable standard and I’ve attained it multiple times in the last year when making important decisions.

The definition of understanding I use is the one by which I would be willing to say “I understand [institution/norm] and the uses it has and the reasons for which it was created” amongst my peers and not expect to be mocked resoundingly for making such a ridiculous claim as actually understanding the uses of a complex institution and the reasons for which it was created. That is, it is the same definition of understanding I use elsewhere.

This also appears to be the standard used whenever someone criticises someone else for failing to abide by Chesterton’s Fence. It is simply taken as an unevidenced given that the people don’t know what they’re changing, because no one does; any claims that they actually do would not be taken seriously, and any claims that they think they do would result in accusations of arrogance. And yet this isn’t taken as a reason to think the demand is unrealistic. It’s a pure rhetorical superweapon.

As a referenced standard expected of others, in order to meet it, it is also necessary that the understanding be accepted by others, so it needs to be an understanding that other people will affirm as correct. Coming to a model that people more or less agree isn’t missing anything important, and that they accept as you understanding the topic in question, is ridiculously impractical for any interesting social institution.

Beyond local use: Chesterton’s own first example, given directly after the concept was originally defined in The Thing (in the chapter The Drift From Domesticity), was criticising people for wanting to make changes (involving reducing the expected authority of elders over younger people) that in his view partially abolished the Household/The Home/the family. Obviously, everyone has an idea of what a family does and is; clearly, the standard he expected was more than having such an idea.

But, anyhow, this is the right end at which to begin any such enquiry; and not at the tail-end or the fag-end of some private muddle, by which Dick has become discontented or Susan has gone off on her own. If Dick or Susan wish to destroy the family because they do not see the use of it, I say as I said in the beginning; if they do not see the use of it, they had much better preserve it.

“Sorry, Susan, you’re wrong to leave your abusive parents until you can demonstrate an understanding of the abstract concept of family sufficient to please me, a person who takes your intention to leave itself as sufficient evidence that you don’t have that understanding. Until you can see what a family is good for you can’t destroy it, you see.”

It’s not me that invented the unreasonable standard; it’s GK Chesterton.


[Epistemic status: Tentative]

I’ve been pondering a cluster of actions that I find kind of alarming, and observe others finding even more extremely alarming and hostile, and trying to work out what they have in common and why they come off as somewhere between “kind of threatening” and “extremely hostile” to the people who aren’t into them.

They include:

And I’ve arrived at a tentative model: I think some people refrain from consumption of some sort, and whenever they notice the resulting lack, they feel more positive about being a good person than they feel sad for the lack. The greater the lack, the more it makes them feel like a good person, so it scales to maintain that property within a moderate range. As a result, a sense of deprivation becomes a pleasant reminder of their own goodness. I don’t think this is as simple as “person hates fun”; I am prepared to believe that being in an environment which conspicuously denies them hedonic pleasure, or even inflicts unpleasantness while declaring them a good person for undergoing it, is a short-term positive experience for them.

The problem, I think, is when these people interact with others. They find it easier to be their better selves if the environment stops them engaging in guilty pleasures, because they actually feel guilt over them rather than finding the concept of a guilty pleasure to be a funny joke, so they want the social environment to enforce the minimalism. This makes stuff worse for everyone else! Also, from their perspective, consumption doesn’t actually generate more value for humans (because while it produces more positive hedons, it loses the moral good feels), and so wanting to generate value from it is selfish for no reason, and thus gets essentially zero sympathy and a lot of anger.

There is some argument I suppose that there’s a weird efficiency trick to gaining utility by not consuming resources, and thus insofar as you could choose to be a moral masochistic minimalist it’s easier to be fulfilled. But if you were prepared to do value editing for convenience you’d just want to wirehead anyway, so I’m going to assume you’re not; certainly, I’m disinclined.
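As a toy model of the above (numbers invented, no claim to psychological realism), give everyone some hedonic utility from consumption and give the moral masochistic minimalist an extra term converting the lack into good-person feels:

# Sketch of the wellbeing model described above; all numbers are made up.

def hedonic(consumption):
    # Everyone gets some pleasure from consuming, with diminishing returns.
    return consumption ** 0.5

def moral_feels(consumption, moral_weight):
    # Converts the lack (1 - consumption) into feeling like a good person.
    # For most people, moral_weight is roughly zero.
    return moral_weight * (1 - consumption)

def wellbeing(consumption, moral_weight):
    return hedonic(consumption) + moral_feels(consumption, moral_weight)

# Compare unrestricted consumption with a strict shared norm (capped at 0.2).
for moral_weight, label in [(0.0, "non-minimalist"), (1.5, "minimalist")]:
    free = wellbeing(1.0, moral_weight)
    capped = wellbeing(0.2, moral_weight)
    print(f"{label}: free={free:.2f}, restricted={capped:.2f}")
# non-minimalist: free=1.00, restricted=0.45  (strictly worse off)
# minimalist:     free=1.00, restricted=1.65  (better off under the norm)

The same enforced restriction comes out as a gain for one type and a pure loss for the other, which is the conflict in miniature.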

I think it helps manage this conflict to notice it exists, and adjust strategies accordingly. For non-moral-masochistic minimalists, who I think are the majority, this means expecting that hedonically costly restrictions will be proposed when they aren’t actually necessary, and that they need to be pushed back on by default even if we agree with many of the end causes they are purportedly for. Hopefully this results in the moral masochistic minimalists building the environments they want out of something other than state/nationwide regulations, and leaving us alone to sit in our high-flow-rate showers, eating our delicious cola-soaked, high-caffeine bacon with the fat left on, in peace. For moral masochistic minimalists, it might mean considering that some humans do not easily gain value from deprivation-for-a-good-cause, and thus only want to undergo deprivation if it’s the least costly way to accomplish a cause they care about, rather than it being an essentially free move; and, if making humans happy is a goal, building strategies for that which don’t presume everyone is, or can easily be, a moral masochistic minimalist.

Relations to existing discourse areas I noticed but won’t dig into, since this has already gone on long enough:

Link to puritanism; I think this actually is puritanism and what I’m actually doing is proposing a model for how it works.

Links to the perennial discourse on whether to deprive oneself of ‘problematic’ content - seems intuitively linked, but it’s not really my wheelhouse; I’ve never been enough in the business of caring what people thought to pay very close attention.

Links to white men writing terribly insulting things about white men; the standard model is that everyone doing this sort of thing is mentally excluding themselves, but I think there’s plausibly a “this hurts me but I feel more good by supporting it than it hurts” element.

Links to veganism vs carnism discourse - could be linked to the thing where people claim the food is just as good: if they’re counting moral feels into the equation, that’s at least as plausible as my previous hypothesis that it’s purely variation in the human sense of taste.

Links to moral realism discourse, debate over how healthy depriving oneself for morality is long-term, etc. I think a lot of people seem to have kind of concerning internal conflicts which hurt them as a result of this kind of thinking when they want a hedonic experience, and some people at least should get away from this thinking as much as possible. Maybe it becomes toxic when you feel like a bad person for not doing the deprivation rather than a good person for doing it, or something. Even at epistemic status “tentative”, I’m not sufficiently confident to say it is bad for everyone, though.

intrigue-posthaste-please:

theunitofcaring:

A couple months ago I left Friday evening, after work, for a trip up the coast with my girlfriend @suspected-spinozist. We drove up to Mendocino and spent the weekend hiking along the coast and exploring botanical gardens and having a lovely time, and then drove back down for work Monday.

I was basically useless the whole next week. I’d predicted that would happen, and I thought it’d be worth it (and it was absolutely worth it.) When I do things, I am spending my ability to do things. If I do things all weekend, I will find it nearly impossible to get anything done all week. I know exactly how much energy for extraneous tasks I have, and if I spend it I will start failing at my non-extraneous tasks, and if I push that I will start failing to eat. 

Because this is my experience of the world, resource conservation models of disability are super relatable to me. I experience really sharp tradeoffs between all of the things I care about. I frequently say no to doing something cool or fun or interesting because I need to save the energy. I have limited ability to do stuff, it regenerates slowly, and having to do stuff when I’m out of ability-to-do-stuff will set me back for even longer. For that reason, I spend lots of my energy on resource conservation - thinking and planning how to do as little stuff as possible while staying on top of my life.

The most common conservation model of disability is the ‘spoons’ one that originated in the chronic illness community. There’s been a lot of arguing over who gets to lay claim to ‘spoons’, but certainly anyone can lay claim to a resource conservation model in general. 

I talked recently to someone whose brain works very differently from mine. If they have the structures in place that they need to succeed, they will just keep on being able to do stuff until one of those structures breaks down. They can pack their weekend and then work all week; they can have something after work every single night. But if a structure crumbles on them, suddenly they can’t do much of anything. 

The person I talked to was familiar with resource conservation models, and this really harmed them when their structures crumbled. They found advice to cut back on the stuff they were doing, save energy, commit to the minimum necessary, cancel plans. And none of that helped, plus it’s actually really depressing and isolating to do the absolute minimum you need to survive every day, so they ended up just as stuck and now without any of the things that made them happy. 

So I think there are people who, instead of a conservation model, benefit from a momentum model - they have a state in which they can get stuff done, and once they’ve built up the structures they need they can just stay there and add stuff to the structure. If they lose their ability to do things there’s a structure that needs replacing - cutting back in general won’t help.

In practice, almost everyone is probably a mixture of these things. Even people who mostly run on momentum would probably hit the point where their ability to do stuff traded off against their ability to do other stuff if, say, they were cutting back on their sleep to crowd more things into their day. Even people who have to shepherd their resources really carefully sometimes have things (like blogging, for me) which are easy and effortless as long as it’s part of their daily routine. And I bet there are people who need to resource-conserve for physical activity but whose socialization or intellectual output is best modeled as a momentum thing, or conversely people who can exercise every day as long as it’s part of their routine but need to carefully plan when they’ll have to expend willpower on tasks like writing.

So it’s probably good to have both models in your head - both because they could both apply to you, in different contexts, and because they will definitely both apply to some people you’re giving advice to.
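A crude sketch of the two shapes in code, with invented numbers and no pretence of psychological accuracy - one agent spends from a slowly regenerating pool, the other runs at full capacity until a supporting structure breaks:

# Toy versions of the two models; every number is made up for illustration.

class ConservationModel:
    """Ability to do stuff is a pool that regenerates slowly."""

    def __init__(self, capacity=10.0, regen=1.0):
        self.capacity, self.regen = capacity, regen
        self.energy = capacity

    def day(self, planned_effort):
        done = min(planned_effort, self.energy)  # can't spend what you lack
        self.energy = min(self.capacity, self.energy - done + self.regen)
        return done

class MomentumModel:
    """Output holds while every supporting structure stands."""

    REQUIRED = {"routine", "task system"}

    def __init__(self):
        self.intact = set(self.REQUIRED)

    def day(self, planned_effort):
        if self.intact == self.REQUIRED:
            return planned_effort
        return 0.2 * planned_effort  # abrupt near-collapse once anything breaks

    def break_structure(self, name):
        self.intact.discard(name)

    def rebuild(self, name):
        self.intact.add(name)

c = ConservationModel()
print([c.day(e) for e in (8.0, 3.0, 3.0, 3.0)])  # [8.0, 3.0, 1.0, 1.0]
# A packed weekend costs the conserver the rest of the week.

m = MomentumModel()
print(m.day(8.0))              # 8.0: packed days are fine while structures hold
m.break_structure("routine")
print(m.day(8.0), m.day(1.0))  # 1.6 0.2: cutting back doesn't fix the collapse
m.rebuild("routine")
print(m.day(8.0))              # 8.0: what helps is repairing the structure

Cutting planned effort helps the first agent and does nothing for the second; rebuilding the broken structure helps the second and does nothing for the first, which is the advice mismatch described above.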

YES THIS! This is often why the “self-care” thing doesn’t work for me. When people talk about self-care they’re often talking about doing things outside of your normal routine that will replenish your energy so you can get back into the swing of things. But frequently for me, doing that just pushes me further off track and reduces my momentum even further.

Even that, though, varies depending on where I am in my bipolar cycle. You mentioned people who need to resource-conserve for some activities but maintain momentum for other activities. I think I need to resource-conserve more when I’m hypomanic but maintain momentum more when I’m depressed.

When I’m *really* depressed, “momentum” might just mean keeping to a bare minimum of daily tasks: taking meds, going outside once, having one conversation over any medium with another person. What won’t really help is classic self-care activities like taking a bath or spending time with a good friend.

When I’m really hypomanic, I have the impulse to structure my day, create new systems, pack my calendar full of daily reminders, routines, lists of things to do - but adhering to that structure won’t even me out; it’ll just make me more obsessive. What I need to do in that case is step back, meditate, do yoga, spend less energy than I think I’ve got because it’ll run out faster than I think.

I’m very heavily momentum-based, although I’ve developed strategies such that I regain momentum within a few days if thrown out of sorts nowadays (roughly, my structure is encoded into external task management and chained-task systems and I just need to get back to them and they’ll restore it).

I do notice resource limits - today I did 236 Anki cards on software development and neuroscience in 57 minutes, including 60 new ones. I could not have done 460 including 120 new ones in two hours. I was woozy and felt like I’d been seriously exercising things afterwards.

But I could absolutely then go do my start-of-day shower, revisit the question of attaching a painting to a wall now that I have new, better adhesive strips, and tidy up. Similarly, on one night after a hackathon when I could barely speak coherently, I could still do my Duolingo - not through heroic effort, but because apparently whatever cognitive resources are involved in practicing French don’t degrade much at 36 hours awake, at least if not heavily used - so it was just kind of a pleasant experience of doing mildly challenging tasks while my phone dings at me and tells me I got points as usual.

Insofar as I have resource limits, it’s less any kind of overall executive-functioning limit and more like one of those convoluted games with twelve separate resources, where I mask them by rapidly switching between things drawing from more than one. Multitasking and switching between types of tasks a lot means that, in my case, the salient limiter is mostly the momentum one, because that does transfer heavily.

Petrov Day in Oxford was a lot of fun! I mean, Seattle converted us all to nuclear ash and, additionally, melted our cake, but you can’t have everything. It was a lot of fun to spend an evening camping with other EA/rationalist sorts from Europe, do some rituals for the day, roast sausages and marshmallows on a fire, and the next day flee from the rain into the cafe and play board games and Werewolf for a bit. I’m looking forward to running it again next year, and not looking forward to cleaning the roasting skewers.

Relatedly, I’m looking at getting the ball rolling (slowly) on helping with organising Winter Solstice. I got the idea that it might be nice to have a Discord server for planning events with other European rat/EA types (it would be nice to talk with anyone else going to EAG London too!), and also as a place where we can socially interact with people who aren’t at events but are still less than a transatlantic flight away. So I’ve spun up a server for European Rationalists/EAs to talk about event planning and meet each other, and if you’re in Europe and even vaguely adjacent it’d be nice to talk with you on it: https://discord.gg/E6MNwGN
