True Masters

I want both the Hindus and Mussalmans to cultivate the cool courage to die without killing. But if one has not that courage, I want him to cultivate the art of killing and being killed rather than, in a cowardly manner, flee from danger. For the latter, in spite of his flight, does commit mental himsa. He flees because he has not the courage to be killed in the act of killing.

My method of nonviolence can never lead to loss of strength, but it alone will make it possible, if the nation wills it, to offer disciplined and concerted violence in time of danger.

My creed of nonviolence is an extremely active force. It has no room for cowardice or even weakness. There is hope for a violent man to be some day non-violent, but there is none for a coward. I have, therefore, said more than once….that, if we do not know how to defend ourselves, our women and our places of worship by the force of suffering, i.e., nonviolence, we must, if we are men, be at least able to defend all these by fighting.

Between Cowardice and Violence by Gandhi

Many people are surprised that Gandhi would explicitly say that violence is preferable to cowardice. Gandhi is often cited by those denouncing ‘violent’ opposition to injustice, but he did not hold a strict deontological opposition to violence. Gandhi believed that non-violence was the best way to win the struggle against injustice, and that the society resulting from a non-violent revolution would be superior to one produced by violent revolution.

In spite, however, of the madness and the vain imitation of the tinsel of the West, the hope lingers in me and many others that India shall survive this death dance and occupy the moral height that should belong to her after the training, however imperfect, in nonviolence for an unbroken period of thirty-two years since 1915.

An India reduced in size but purged in spirit may still be the nursery of the nonviolence of the brave and take up the moral leadership of the world, bringing a message of hope and deliverance to the oppressed and exploited races. But an unwieldy, soul-less India will merely be an imitation, and a third-rate imitation at that, of the Western military States, utterly powerless to stand up against their onslaught. I have no desire to outlive the India of my dreams.

India and the Violent Way by Gandhi

Nevertheless, Gandhi’s conception of non-violence was quite expansive. He believed in radical criminal justice reform. It is much harder to support a principle if you don’t truly think it works. Gandhi believed in an India free of both colonialism and internal oppression, and he thought non-violence was the best way to achieve his dream.

Polite Order

Anyone else concerned that US politics has suddenly collapsed from civil arguments over redistribution amounts & mechanisms (left vs right) to street violence between rival gangs?
Polite order is my ingroup. Civil unrest is my outgroup. This rapid transformation and the loss of shared anti-violence culture horrifies me. The threat to the lives of the urban underprivileged from a collapse of civil order is VASTLY greater than the minuscule number of unjust murders by police. These rioting fools are ignoring the history of failed state body counts.
— Patri Friedman, Facebook
Patri Friedman is the founder and current board chairman of the Seasteading Institute. He is the grandson of the famous Libertarian economist Milton Friedman and son of the ancap economist David Friedman. He has gotten substantial funding from Peter Thiel. He is a pretty influential person in rationalist-y libertarian circles. And many people have said similar things. The point of view is hard for me to understand.

It seems to me that consistent supporters of freedom should be ecstatic about Black Lives Matter. The US incarceration rate is truly ridiculous. We incarcerate at a rate 6.3 times higher than the EU’s. The white incarceration rate is four times the EU average, and the Black rate is twenty-two times it. About a third of Black men have a felony conviction. This represents a truly enormous loss of human freedom.

A serious obstacle to change is that the police were considered rather trustworthy and honest by the public. In 2019, 54 percent of people said the police had high or very high honesty and integrity. Of course, police lie all the time. Police also take more property than robbers do. In many circles, the famous ‘Don’t Talk to the Police’ video is considered common-sense advice. We know the police cannot be trusted; it would be better if the wider public knew it too. BLM is starting to change the public perception of the police. We cannot protect our rights until the public becomes much more skeptical of the police. Even if you are not concerned about racism, you should enthusiastically support BLM if you truly prioritize freedom.

You can only serve so many masters, perhaps just one. In theory, Patri is in favor of human freedom. But it seems to me his true master is ‘polite order’. It is clear how Patri will side if freedom trades off against order and property rights. You cannot model people based on what they abstractly support. You need to know which masters they serve.


Animal Rights and AI Risk

A few weeks ago I was protesting outside a pig slaughterhouse. I thought about GPT-3 a bit. I am afraid of a bad singularity, but the bad singularity has already come for the pigs with the explosion of human intelligence, and as a result, everyone they love is definitely going to die horribly. The idea that technology might lead to hell on earth is not a theoretical problem. It’s happened before. I wonder how well a future singularity is going to go for people like me.

Many people are rightly concerned about technical AI alignment. Successful technical alignment would mean that each AI system is permanently aligned with the interests of some subset of humans. Is this sufficient to be confident of a good future? There are still serious risks even if the dominant coalition of AIs is aligned with something like the collective opinion of humanity (this would include an aligned singleton).

Here are some reasons why the future might be full of astronomical suffering:

  • Economic productivity: Suffering might be instrumental in achieving high economic output. Animal suffering in factory farms is a case in point: it just so happens that the most economically efficient way to satisfy the demand for cheap meat involves a lot of suffering. This is not currently an s-risk because it’s not astronomical in scope, but it’s possible that future technology will enable similar structures on a much larger scale. For instance, the fact that evolution uses pain suggests that learning might be more efficient if negative reward signals are also used, and we might consider sufficiently advanced and complex reinforcement learners to be capable of suffering.
  • Information gain: Experiments on humans or other sentient creatures might be useful for scientific purposes (like animal testing), while causing harm to those experimented on. Again, future technology may enable such practices on a much larger scale as it may become possible to run a large number of simulations of artificial minds (or ems) capable of suffering.
  • Entertainment: Many humans enjoy forms of violent entertainment. There are countless historical examples (gladiator fights, public executions and torture, hunting, and much more). While nowadays such entertainment is often fictional (e.g. in video games or movies), some real-world instances still exist (e.g. torture and execution videos, illegal animal fights, hunting). It is conceivable that complex simulations will be used for entertainment purposes in the future, which could cause serious suffering if these simulations contain artificially sentient beings.

Every detail you add to your prediction makes it less likely, but it seems like there are many ways things could go wrong. If we think rapid technological change is plausible, then we should be paranoid about causing or tolerating tons of suffering. Humanity’s current behavior suggests we are very far from being sufficiently paranoid about suffering.

As far as I can reason, the extent of animal suffering is the most important injustice of our time, but it isn’t obvious to everyone. If a transformative superintelligent AI is in our near-to-medium future, it seems rather urgent to shift the distribution of opinion on the importance of suffering. It would be ideal if you could influence the opinions of the people most likely to control transformative AI.  There are many plausible options including direct activism, gaining political influence, or doing fundamental research. It is important to keep replaceability in mind when deciding how one can best contribute. It is worth noting many actions are still valuable even if transformative AI is far away.

Our treatment of animals, and to a lesser degree other humans, is extremely troubling evidence. It suggests that low power agents will be mistreated if there are economic or ideological reasons to support their mistreatment. The near-to-medium future might have dramatically more inequality and potential for mistreatment.

It is unclear how much time is left until a transformative AI takes off. But it is worth thinking about how much values can be changed and which institutions are values aligned with a future free of astronomical suffering. I also think it is better to start taking some sort of action now instead of just planning. You can always change your plans later as long as you avoid doing anything counter-productive.

Notes:

  1. In general, I think issues of severe suffering are more important than issues of distributing gains. The gains from AI might be very concentrated – horses and chimps did not gain much from the rise of humanity. There is a plausibly high-value political project of ensuring the gains from AI are somewhat evenly distributed. Some organizations like OpenAI already support this goal.
  2. Plausibly you can focus on getting AI to learn human values and extrapolate them to something like our ‘Coherent Extrapolated Volition’. Related MIRI paper. In addition to the CEV being constructible, it seems like we need to assume at least one of two things: either the CEV of most subsets of humanity matches the CEV of humanity as a whole, or we have to assume the people controlling AI will altruistically choose to encode humanity’s CEV instead of their own.
  3. There are plausible functional decision-theoretic reasons to selfishly want to be in the coalition that ‘robustly cares about the suffering of less powerful agents’.
  4. Some people rate especially reviled prisoners as less deserving of concern than plants. The distressing evidence is not limited to our treatment of animals.

A 2016 study that asked US participants to rate 30 different entities by the “moral standing” that they deserved found that “villains” (murderers, terrorists, and child molesters) were deemed by participants to deserve less moral standing than “low-sentience animals” (chickens, fish, and bees), with mean scores of 1.77 and 2.64 respectively on a scale from 0 to 9. Indeed, they were deemed to deserve less moral standing than non-sentient “environmental targets” (3.53) or plants (2.52).

Crazy Life Advice

What is some ‘crazy’ life advice that is plausibly a good idea? I am including stuff from ‘real crazy’ to ‘a little offbeat’. Here are some ideas of various spiciness:

Finance:

1) Borrow money -> bet on red. If you lose, declare bankruptcy. – This one seems possibly +EV for people with low current net worth.

2) Get married to avoid college debt since it drops expected family contribution to zero – just a 10/10 idea?

3) Wall Street bets as a defense against insider trading claims. – If you suddenly bet a lot on options when you have info you will probably get caught insider trading. But if you have a history of crazy options trading you can say the bet that paid off was just a hunch too.

4) Invest a large amount of your net worth in plausibly AI-entangled companies – I shifted most of my investments into this portfolio after GPT-3. Singularity is maybe near. Sorry EMH.

5) Green card lottery every time, even if you don’t plan to move to the USA – The value of a green card is high. Depending on your country and situation you might have a 0.2-2%+ chance to get the green card. If you win the lottery, reassess your options.

6) Facilitate moving money to advantaged donors – The benefits of donating to charity vary a huge amount based on who donates the money. You can double your money if you get an employer match. The tax benefits of donating also vary quite a bit: you get no benefit as an American if you take the standard deduction, and tax advantages vary a lot by country. In France, tax credits are calculated at 66 percent of the value of the donation, and an individual’s total tax credits for one year may not exceed 20 percent of their taxable income. There are even crazier ideas where you try to buy ‘lottery ticket’ options and only donate the ones that go up. Here is a Google Colab where you can play around with the numbers:
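To make the donation-leverage math concrete, here is a rough Python sketch. The function and its parameters are my own illustrative construction (not tax advice); only the dollar-for-dollar match, the French 66 percent credit, and the 20 percent cap come from the discussion above.

```python
def effective_donation(gross, match_rate=0.0, tax_credit_rate=0.0,
                       credit_cap_fraction=None, taxable_income=None):
    """Rough sketch: how far a donor's out-of-pocket money goes.

    match_rate: employer match (1.0 = dollar-for-dollar).
    tax_credit_rate: fraction of the donation returned as a tax credit.
    credit_cap_fraction / taxable_income: optional cap on total credits
    (e.g. France caps credits at 20% of taxable income).
    Returns (amount the charity receives, donor's out-of-pocket cost).
    """
    received = gross * (1 + match_rate)   # what the charity ends up with
    credit = gross * tax_credit_rate      # what the donor gets back at tax time
    if credit_cap_fraction is not None and taxable_income is not None:
        credit = min(credit, credit_cap_fraction * taxable_income)
    return received, gross - credit

# US donor on the standard deduction but with a full employer match:
received, cost = effective_donation(1000, match_rate=1.0)
# charity receives 2000 for 1000 out of pocket

# French donor, no match: 66% credit, capped at 20% of taxable income.
received_fr, cost_fr = effective_donation(1000, tax_credit_rate=0.66,
                                          credit_cap_fraction=0.20,
                                          taxable_income=30000)
# charity receives 1000 for 340 out of pocket
```

The same donation can cost one donor roughly a third of what it costs another, which is why routing money to advantaged donors is worth the hassle.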

Other:

1) Make decisions with double-blind 2nd price auctions – The Beeminder couple does this. Description.

2) Grad school as a way to get 2+ years of slack while you work on a different project – I wish I had done this tbh. Here is Andrew Critch’s writeup.

3) Selective Radical Honesty – Being radically honest in general seems like a mistake. Very few people can make it work. But it is practical to practice radical honesty with at least some of your closest relations. By this, I mean that you will freely explain your world models and actual feelings without much of any editing. I will say you should probably hold back some of your mental models. If you share all the ways you evaluate people you risk being Goodharted. I am extremely open and honest with my primary partner and some of my friends.

4) Van / Slackmobile life – If you are skilled you can build a rather livable slackmobile for 10K. Doing it with 20-30K is easier but still affordable for many people. The quality of life in a modified box truck is surprisingly high. Here are some details on building a slackmobile including a full budget.
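For the curious, the double-blind second-price auction from idea 1 above can be sketched in a few lines of Python. This is a toy version of my own devising; a real implementation needs some trusted way to keep the bids sealed until both are in.

```python
def decide(bids):
    """Settle a two-person decision by sealed second-price auction.

    bids maps each person's name to what they'd pay to get their way.
    The higher bidder wins the decision but pays the *loser's* bid to
    the loser, so (as in a Vickrey auction) honest bidding is the
    dominant strategy. Ties are broken arbitrarily here.
    """
    (winner, _), (loser, losing_bid) = sorted(
        bids.items(), key=lambda kv: kv[1], reverse=True)
    return winner, loser, losing_bid

winner, loser, price = decide({"alice": 40, "bob": 25})
# alice gets her way and pays bob 25
```

The second-price rule is what makes this work as a decision procedure: since you pay the other person’s bid rather than your own, there is no incentive to exaggerate how much you care.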

ht: Noah Kreuter, Brian Liu, Char/Astria, Robert Sharpobject

I will update this list if I find more interesting ideas.

Replaceability Concerns and Possible Responses

How concerned should we be about replaceability? One reason some people don’t seem that concerned is that the leaders of EA organizations reported very high estimates for the value of their new hires. About twenty-five organizations answered the following question:

For a typical recent Senior/Junior hire, how much financial compensation would you need to receive today, to make you indifferent about that person having to stop working for you or anyone for the next 3 years?

The same survey showed that organizations reported feeling more talent constrained than funding constrained.

On a scale of 0 to 4, respondents saw themselves as 2.8 constrained by talent and 1.5 by funding, similar to last year and consistent with the donation trade-off figures.

The 2019 survey replicated the results on talent vs funding constraints. It also had useful information on which skills organizations felt were in demand.

Replaceability

On the other hand, “After one year of applying for EA jobs: It is really, really hard to get hired by an EA organization” is one of the highest-upvoted threads on the EA forum. The applicant had a very strong resume and applied for twenty positions. He was rejected by all of them. He lists all the positions he applied for. The author claims his situation is not unusual:

I know several people who fulfil all of the following criteria:

– They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school)

– They are within the top 5% of their course

– They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, …)

– They are very motivated and EA aligned

– They applied for at least 5 positions in the EA community and got rejected in 100% of the cases.

He had gotten encouragement from some well informed EA leaders. Said leaders seemed pretty surprised at how badly his job search went. The comments, in general, are very informative. Several people give full details of their EA job searches. However, I will quote from one comment by the author:

People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to a point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led the operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.

I assume EA leadership has become more conservative in the advice it gives since the thread was so widely read. But I don’t think the fundamentals of the EA job market have changed very much.

It is somewhat surprising the EA job market is so competitive. The community is not terribly large. Here is an estimate:

This suggests that there are roughly 2000-3000 highly engaged EAs in total.

Likewise the estimated size of the EA community more broadly, worldwide, is only 5000-10,000 or about the size of a small town or university.

This suggests to me a very large fraction of highly engaged EAs are interested in direct work.

Two Perspectives

One point of view is roughly the following: It is extremely important to get the best person for the job. Organizations highly value their recent hires but seemingly lack the capacity to effectively ramp up hiring. Performance in many positions varies a huge amount. Therefore if there is even a chance you are the best fit for a job you should apply. Organizations are slow to fire bad performers. People are bad at judging their aptitude. If we discourage applicants, there will be very strong applicants too humble to apply. If people build career capital in hopes of getting an EA job they are unlikely to be hurt even if they never do direct work. The expected value of searching over a larger set of applicants is high.

A different point of view is that it looks like a huge percentage of engaged EAs want to do direct work. Engaged EAs as a group are quite talented. Unless you are an unusually good fit for a direct work job that job should, and probably will, go to someone else. There are some direct work jobs that require genuinely rare skills. But if the job seems like it could be done by an average Princeton grad, then it will probably have qualified applicants and you are unlikely to be the best fit. The risk of discouraging people is real but there are real costs to giving people the wrong impression of their prospects. People can feel betrayed and lied to. Engaged EAs might become discouraged or decide they cannot trust the EA information ecosystem. The early years of a person’s career can be very impactful. It is unwise to encourage people to plan for careers that probably won’t work out.

Ideas

Let us imagine the second perspective is correct and think of ideas of what to do. Of course, you can still do direct work if there is a good fit for your specific abilities and experiences. You can also look for career capital you are especially suited to build quickly (for example trying to pivot into AI safety). But other effective altruists may think of the same plan.

One scalable option is clearly earning to give. If you earn to give and donate X dollars per year you basically increase the total amount donated to effective charities by X dollars. The marginal value of EA donations does decrease as more total dollars are donated. But we seem pretty far from exhausting opportunities for effective giving. Overall earning to give does not suffer much from replaceability issues.

Another option is to be willing to make choices other effective altruists seem unwilling to make. For reasons I do not fully understand it seems like few EAs want to try starting an organization even though many (perhaps most) want to work at one. Having more organizations seems useful to me. It would add healthy competition and there is definitely the talent pool. Perhaps potential founders think they cannot get funding? On net trying to found an org, conditional on having a strong team and vision, seems like a plausibly high EV idea. Founding an org probably decreases the odds of other orgs being funded so there are some replaceability concerns.

One could also do EA work for low or potentially no wages. Starting an org with no seed funding would effectively require doing this for some amount of time. Any EA org that is hiring constrained should not be offering very low wages. But perhaps there are ways to produce effective research for no compensation. Doing this with no structure might be an ineffective use of human capital. Organizing volunteers can be challenging but perhaps volunteer-run EA orgs could be created?

You could consider taking on legal risks. Various forms of non-violent direct action might be an effective way to get high-impact political changes. This view seems most common in the animal rights/welfare community. The number of people willing to take legal risks is quite low so replaceability concerns are negligible.

There are some ways to ‘donate’ things besides money. Various organ donation charities have been given some EA funding. Donating a kidney, or a portion of a liver, is not replaceable. Various personal forms of mentorship are probably not replaceable either. It is hard to imagine making anything like a career out of these opportunities, but they are worth keeping in mind.

Engaging Seriously with Short Timelines

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a ‘slow takeoff’ where “There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles.” It is unclear to me whether that is more or less likely than a rapid and surprising singularity. But it certainly seems much easier to prepare for. I don’t think we have a good model of what exactly will happen but we should prepare for as many winnable scenarios as we can.

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation – Assuming timelines are short, we might not have time for provably safe AI. We need AI-safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.

IDA is a good framework to bet on imo. OpenAI seems to be betting on IDA. Here is an explanation. Here is a lesswrong discussion. If you are mathematically inclined and understand the basics of deep learning now might be a great time to read the IDA papers and see if you can contribute. Other frameworks that can be used in the short term to align real systems would also be valuable.

Get capital while you can – Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least money can be converted into time. Be frugal, you might need those resources soon.

Besides, the value of human capital might fall relative to financial capital. If you have a lucrative position (ex: finance or tech), now is a good time to focus on making money. Investing in your human capital by going back to school is a bad idea.

Invest Capital in companies that will benefit from AI technology – Tech stocks are already expensive so great deals will be hard to find. But if things get crazy you want your capital to grow rapidly. I would especially recommend hedging ‘transformative AI’ if you will get rich anyway if nothing crazy happens.

I am doing something like the following portfolio:

ARKQ – 27%
BOTZ – 9%
Microsoft – 9%
Amazon – 9%
Alphabet – 8% (ARKQ is ~4% Alphabet)
Facebook – 7%
Tencent – 6%
Baidu – 6%
Apple – 5%
IBM – 4%
Intel – 3%
Nvidia – 2% (both BOTZ and ARKQ hold Nvidia)
Salesforce – 2%
Twilio – 1.5%
Alteryx – 1.5%
Tesla – 0% (ARKQ is ~10% Tesla)
BOTZ and ARKQ are ETFs. They have pretty high expense ratios. You can replicate them yourself if you want to save the 68-75 basis points. BOTZ is pretty easy to replicate with only ~10K.
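If you hold ETFs alongside individual stocks, it is worth computing your look-through exposure so you don’t accidentally double up. A quick sketch: the ~4% Alphabet and ~10% Tesla figures come from the portfolio above, but the other ETF composition numbers here are rough assumptions for illustration (real weights drift constantly).

```python
# Direct position weights from the portfolio above (fractions of the whole).
direct = {"alphabet": 0.08, "tesla": 0.00, "nvidia": 0.02}

# How much of the portfolio sits in each ETF.
etf_positions = {"ARKQ": 0.27, "BOTZ": 0.09}

# Assumed approximate ETF compositions; only the Alphabet and Tesla
# weights in ARKQ are taken from the text, the rest are placeholders.
etf_holdings = {
    "ARKQ": {"alphabet": 0.04, "tesla": 0.10, "nvidia": 0.03},
    "BOTZ": {"nvidia": 0.08},
}

def look_through(stock):
    """Total exposure to a stock: direct weight plus the slice held via ETFs."""
    indirect = sum(weight * etf_holdings[etf].get(stock, 0.0)
                   for etf, weight in etf_positions.items())
    return direct.get(stock, 0.0) + indirect

# Tesla shows up at ~2.7% despite a 0% direct position.
print(round(look_through("tesla"), 4))
```

This is why the portfolio above lists Tesla at 0%: the ARKQ position already provides a meaningful Tesla allocation.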

Several people think that land will remain valuable in many scenarios. But I don’t see a good way to operationalize a bet on land. Some people have suggested buying options since it is easier to get leverage and the upside is higher. But getting the timing right seems tricky to me.

Physical and Emotional Preparation – You don’t want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with RSI work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you casually approach training. Track your results and have a system!

In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).

Political Organizing and Influence – Technological progress does not intrinsically help people. Current technology can be used for good ends, but it can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0. By the standards of previous eras, change accelerated a huge amount. ‘Singularity 1.0’ did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, “Only a crisis – actual or perceived – produces real change”. If there is a crisis coming there may be large political changes coming soon. Influencing these changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for too long. But various sorts of intergovernmental cooperation might be possible and increasing the odds of these deals could be high value.

Capabilities Research – This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race or at least that GPT-4 will. In this case, it might make sense to help a relatively values-aligned organization win (such as OpenAI as opposed to the CCP). If you are, or could be, very talented at deep learning you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?

Maximum Reasonable Wealth

Assume you endorse basic leftwing principles of equality. How much wealth can you defensibly accumulate? I am explicitly trying to estimate the upper bound of what is reasonable. I would say a fair line is ‘enough money that you never need to work again’. You can plausibly justify liberating yourself from capitalism, but you cannot justify any more than that.

How much money do you need to safely stop working? I would say the answer is at most 400-500K. The 4% rule says you can safely withdraw 4% of your savings each year and never run out.  The rate was originally computed by assuming:

  • You were invested 50/50 in treasury bonds and the S&P 500
  • You needed to survive 30 years
  • The earliest start date considered is 1925

Four percent was the highest rate that never went broke, even in the worst case. However, this rate is fairly conservative. It assumes you are not going to get any money from Social Security or other government programs. I also think it is unreasonable to expect 100% safety. If you have 400K you are objectively much safer than the vast majority of people! If things look bad early, you can rebuild your wealth a bit. So I think a 5% rule is probably reasonable.
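The arithmetic behind the 400-500K figure is simple: required savings is annual spending divided by the withdrawal rate. A quick sketch, using the 20K-per-year budget discussed below:

```python
def nest_egg(annual_spending, withdrawal_rate):
    """Savings needed so that withdrawing `withdrawal_rate` of the pot
    each year covers `annual_spending`."""
    return annual_spending / withdrawal_rate

# 20K/year of expenses:
assert round(nest_egg(20_000, 0.04)) == 500_000  # 4% rule: 25x expenses
assert round(nest_egg(20_000, 0.05)) == 400_000  # 5% rule: 20x expenses
```

Equivalently: multiply your yearly expenses by 20 (for the 5% rule) or 25 (for the 4% rule).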

You can certainly live in most of the United States on 20K. I did it in Pennsylvania for years without any serious effort. It is harder to live in a rich area, but being liberated from capitalism does not mean you are liberated to live in San Francisco. The computations for a committed pair (or trio, etc.) are more complicated. Obviously, the upper bound of one million holds. But many couples find they save quite a lot by living together. You can compute 20-25 times your combined yearly expenses and see how it compares.

What are the implications of this? If you already have 500K in wealth, unspent income should immediately be donated or given away. If you do not already have 500K, you should definitely still give away money as you make it. Most people who claim they will donate later are just deluding themselves. And even if you really do donate later, amassing resources is always concentrating power. We need equality, not philanthropy. Following these norms can make it hard to own a home in even mid-priced cities. But there is always a leftwing solution. Housing can, and should, be owned collectively. The upper bound should not be used as an excuse to waste money instead of giving it away.

I reiterate this is just an upper bound. If you are comfortable with less of a safety net there is no need to accumulate so much. The advice in this article is only relevant if you have already decided to work a relatively lucrative job. If you are a full-time activist then you are unlikely to face the question of “how much wealth can I justify accumulating?”. But there are many people who identify as leftwing and do have lucrative sources of income.

Computing this sort of number at all can seem like a justification for inequality. But the conclusion has some radical-ish implications. For example, arguably the leading leftist in the United States is Bernie Sanders. Bernie Sanders has a net worth of around 2.5 million dollars. This is objectively not ok and fellow leftists should say so. It is especially unacceptable given that he is almost 80 years old and has access to many safety nets. He really should immediately sell off his property and donate the proceeds. In ‘Why You Should be a Socialist’ Nathan Robinson defends Bernie’s wealth. Nathan Robinson is a leading leftist figure in the United States and our conclusion is that he is insufficiently radical.

Mass Incarceration

The US incarceration rate is about 0.7%. About 2.3 million people are incarcerated. About 4.5 million more people are on probation or parole. The world median rate is around 0.14%.  This means we incarcerate about five times as many people as we would if we had sane policies. I am skeptical you need to incarcerate anywhere near 0.14% of your population. But you certainly don’t need to incarcerate more!
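A quick sanity check of these numbers in Python:

```python
incarcerated = 2_300_000    # US incarcerated population
us_rate = 0.007             # ~0.7% of the US population
world_median_rate = 0.0014  # ~0.14%, the world median

# How many times the world-median rate the US incarcerates at.
ratio = us_rate / world_median_rate  # ~5x

# People incarcerated beyond what the world-median rate would imply.
excess = incarcerated * (1 - world_median_rate / us_rate)  # ~1.84 million
```

Dropping to the world-median rate would mean keeping roughly 460,000 people incarcerated rather than 2.3 million.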

How bad is this? We have, at least, an extra 1.84 million people in jail. They don't need to be there to maintain a reasonable level of safety. Many, probably most, of them are held in conditions bad enough to constitute torture. A key premise, which I accept and you perhaps do not, is that punishment is never inherently good. We are also well past the point where we are deterring crime by incarcerating people. GiveWell says the following:

The best estimate of the impact of additional incarceration on crime in the United States today is zero. And, while that estimate is not certain, there is as much reason overall to believe that incarceration increases crime as decreases it.

Trapping people in jail for no good reason is effectively kidnapping. Kidnapping millions of people and torturing a large percentage of them is very bad! Some people would say I am making the ‘worst argument in the world’. But this is my actual feeling. If you forcibly imprison people you need a very good reason or else you are indeed kidnapping them. Maybe you did not intend to kidnap them but you did.

Mass incarceration in the USA is a human rights abuse on an extremely large scale. You should support any remotely reasonable efforts to reduce the incarceration rate. Certainly, you should support the Black Lives Matter protests regardless of whether some property gets damaged. Even if you quibble with the word kidnapping we are ruining millions of lives for no good reason. This has to end!


Please Save Us

Last night I prayed for the first time in over a decade. I don’t think God is very likely to exist. But I cried and begged for help. There are so many earthlings suffering terribly. Pigs are being steamed to death or kept in cages their whole lives. Humans are kept in solitary confinement. Animals in the wild are being eaten alive. The possibility of electronic lifeforms means the future might be even worse. Humanity has used its current power to build hell on earth and torture tens of billions of land animals. What will future humans use their greater powers to do?

But even if humanity comes to its senses wild animals will still be suffering massively. Some people think we should destroy most of the biosphere. But our brothers and sisters in the wild are earthlings too. They have the same right to live that we do.

It all feels like a bad LSD trip. Personal identity is ultimately an illusion. The fact that the suffering isn’t happening to ‘me’ doesn’t make it any less real. There is no escape without truly radical change. All we can do is hope and work for a better future. But I don’t know if I can believe that compassion will win in the end. I don’t know how much hope I have left for a good future.

God, if you are out there, please save your children. I don’t know if we can save ourselves. Our nature is to be cruel to one another. Please save us. Please.


People hate non-shockers

There is a well-known thought experiment about bad equilibria that goes something like this:

A society has the following rules:

— Everyone must publicly electro-shock themselves 8 hours per day.

— If you see someone not shocking themselves, then you must kill them.

— Anyone who violates the second rule must also be killed. The same is true for all meta levels. Non-killers of non-killers of … of non-shockers must be killed.

This equilibrium can get locked in even if every single person dislikes it. I think similar but weaker dynamics apply all the time in normal life. Why is there so much hyperbolic hate directed against furries?

Most people are very repressed. Society has a norm that you must keep 'low status' or 'cringey' aspects of yourself private. Furries openly violate this norm. People feel the urge to punish them even if they are not directly disgusted, because being seen as tolerant toward furries puts them at risk. The furries are effectively forcing people to choose between self-preservation and punishing someone else. People hate being forced to make that choice.

I don’t claim this is the only dynamic at play. People also like to look down on official low-status groups to feel better about themselves. But it is an important dynamic.