Replaceability Concerns and Possible Responses

How concerned should we be about replaceability? One reason some people don’t seem that concerned is that the leaders of EA organizations reported very high estimates for the value of their new hires. About twenty-five organizations answered the following question:

For a typical recent Senior/Junior hire, how much financial compensation would you need to receive today, to make you indifferent about that person having to stop working for you or anyone for the next 3 years?

The same survey showed that organizations reported feeling more talent constrained than funding constrained.

On a scale of 0 to 4, respondents saw themselves as 2.8 constrained by talent and 1.5 by funding, similar to last year and consistent with the donation trade-off figures.

The 2019 survey replicated the results on talent vs funding constraints. It also had useful information on which skills organizations felt were in demand.

Replaceability

On the other hand, “After one year of applying for EA jobs: It is really, really hard to get hired by an EA organization” is one of the highest-upvoted threads on the EA Forum. The author had a very strong resume and applied for twenty positions. He was rejected by all of them. He lists every position he applied for and claims his situation is not unusual:

I know several people who fulfil all of the following criteria:

– They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school)

– They are within the top 5% of their course

– They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, …)

– They are very motivated and EA aligned

– They applied for at least 5 positions in the EA community and got rejected in 100% of the cases.

He had gotten encouragement from some well-informed EA leaders, who seemed pretty surprised at how badly his job search went. The comments, in general, are very informative. Several people give full details of their EA job searches. However, I will quote from one comment by the author:

People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to a point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led the operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.

I assume EA leaders have become more conservative in the advice they give, since the thread was so widely read. But I don’t think the fundamentals of the EA job market have changed very much.

It is somewhat surprising that the EA job market is so competitive. The community is not terribly large. Here is an estimate:

This suggests that there are roughly 2000-3000 highly engaged EAs in total.

Likewise, the estimated size of the broader EA community worldwide is only 5,000-10,000, roughly the size of a small town or university.

This suggests to me a very large fraction of highly engaged EAs are interested in direct work.

Two Perspectives

One point of view is roughly the following: It is extremely important to get the best person for the job. Organizations highly value their recent hires but seemingly lack the capacity to ramp up hiring effectively. Performance in many positions varies a huge amount, and organizations are slow to fire bad performers. Therefore, if there is even a chance you are the best fit for a job, you should apply. People are bad at judging their own aptitude; if we discourage applicants, some very strong candidates will be too humble to apply. And if people build career capital in hopes of getting an EA job, they are unlikely to be hurt even if they never do direct work. The expected value of searching over a larger set of applicants is high.

A different point of view is that a huge percentage of engaged EAs seem to want to do direct work, and engaged EAs as a group are quite talented. Unless you are an unusually good fit for a direct-work job, that job should, and probably will, go to someone else. Some direct-work jobs require genuinely rare skills. But if the job seems like it could be done by an average Princeton grad, it will probably have qualified applicants, and you are unlikely to be the best fit. The risk of discouraging people is real, but there are also real costs to giving people the wrong impression of their prospects. People can feel betrayed and lied to. Engaged EAs might become discouraged or decide they cannot trust the EA information ecosystem. The early years of a person’s career can be very impactful, so it is unwise to encourage people to plan for careers that probably won’t work out.

Ideas

Let us imagine the second perspective is correct and consider what to do. Of course, you can still do direct work if a job is a good fit for your specific abilities and experience. You can also look for career capital you are especially suited to build quickly (for example, trying to pivot into AI safety). But other effective altruists may think of the same plan.

One scalable option is clearly earning to give. If you earn to give and donate X dollars per year you basically increase the total amount donated to effective charities by X dollars. The marginal value of EA donations does decrease as more total dollars are donated. But we seem pretty far from exhausting opportunities for effective giving. Overall earning to give does not suffer much from replaceability issues.

Another option is to be willing to make choices other effective altruists seem unwilling to make. For reasons I do not fully understand, few EAs seem to want to try starting an organization, even though many (perhaps most) want to work at one. Having more organizations seems useful to me: it would add healthy competition, and the talent pool is definitely there. Perhaps potential founders think they cannot get funding? On net, trying to found an org, conditional on having a strong team and vision, seems like a plausibly high-EV idea. Founding an org probably decreases the odds of other orgs being funded, so there are some replaceability concerns.

One could also do EA work for low or potentially no wages. Starting an org with no seed funding would effectively require doing this for some amount of time. Admittedly, any EA org that is genuinely hiring-constrained should not be offering very low wages. But perhaps there are ways to produce effective research for no compensation. Doing this with no structure might be an ineffective use of human capital. Organizing volunteers can be challenging, but perhaps volunteer-run EA orgs could be created?

You could consider taking on legal risks. Various forms of non-violent direct action might be an effective way to get high-impact political changes. This view seems most common in the animal rights/welfare community. The number of people willing to take legal risks is quite low so replaceability concerns are negligible.

There are some ways to ‘donate’ things besides money. Various organ donation charities have been given some EA funding, and donating a kidney, or a portion of a liver, is not replaceable. Various personal forms of mentorship are probably not replaceable either. It is hard to imagine making anything like a career out of these opportunities, but they are worth keeping in mind.

Engaging Seriously with Short Timelines

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a ‘slow takeoff’ where “There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles.” It is unclear to me whether that is more or less likely than a rapid and surprising singularity. But it certainly seems much easier to prepare for. I don’t think we have a good model of what exactly will happen but we should prepare for as many winnable scenarios as we can.
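As a back-of-envelope check on what that benchmark implies (my own arithmetic, not Christiano’s), a four-year doubling corresponds to roughly 19% annual growth, versus roughly 3% today:

```python
# Annual growth rate implied by a given doubling time.
# The ~25-year doubling time for today's world economy is a rough assumption.

def annual_growth(doubling_years: float) -> float:
    return 2 ** (1 / doubling_years) - 1

print(f"slow takeoff (4-year doubling): {annual_growth(4):.1%}/yr")   # ~18.9%/yr
print(f"today (~25-year doubling):      {annual_growth(25):.1%}/yr")  # ~2.8%/yr
```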

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation – Assuming timelines are short, we might not have time for provably safe AI. We need AI-safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.

IDA is a good framework to bet on, imo, and OpenAI seems to be betting on it. Here is an explanation. Here is a LessWrong discussion. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute. Other frameworks that can be used in the short term to align real systems would also be valuable.
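To make the amplify/distill loop concrete, here is a toy, runnable sketch of my own construction (it is not from the IDA papers, and a real system would train a neural net rather than fill a lookup table). The task is summing tuples of numbers; the overseer decomposes a question and trusts the current model on the subquestions:

```python
# Toy sketch of the Iterated Distillation and Amplification loop.
# The 'model' is a lookup table standing in for a trained network.

def model_answer(model, question):
    # The fast model: looks up answers it was trained on, otherwise guesses.
    return model.get(question, question[0] if question else 0)

def amplify(model, question):
    # An overseer decomposes the question and trusts the current model on
    # the subquestions, yielding a better answer than the model alone.
    if len(question) <= 1:
        return sum(question)
    mid = len(question) // 2
    return model_answer(model, question[:mid]) + model_answer(model, question[mid:])

def distill(model, questions):
    # 'Training': the fast model learns to imitate the slow amplified process.
    return {q: amplify(model, q) for q in questions}

question = (1, 2, 3, 4)
subquestions = [question[i:j] for i in range(4) for j in range(i + 1, 5)]
model = {}
for _ in range(3):
    model = distill(model, subquestions)

print(model_answer(model, question))  # 10 -- correct after a few iterations
```

Each round, distillation bakes the amplified (decomposed) answers into the fast model, so the next round of amplification can build on them; that bootstrapping is the core of the scheme.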

Get capital while you can – Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least, money can be converted into time. Be frugal; you might need those resources soon.

Besides, the value of human capital might fall, so investing in your human capital by going back to school is a bad idea. If you have a lucrative position (ex: finance or tech), now is a good time to focus on making money.

Invest capital in companies that will benefit from AI technology – Tech stocks are already expensive, so great deals will be hard to find. But if things get crazy, you want your capital to grow rapidly. I would especially recommend this as a hedge on ‘transformative AI’ if you will be rich anyway in scenarios where nothing crazy happens.

I am doing something like the following portfolio:

ARKQ – 27%
BOTZ – 9%
Microsoft – 9%
Amazon – 9%
Alphabet – 8% (ARKQ is ~4% Alphabet)
Facebook – 7%
Tencent – 6%
Baidu – 6%
Apple – 5%
IBM – 4%
Tesla – 0% (ARKQ is 10% Tesla)
Nvidia – 2% (both BOTZ and ARKQ hold Nvidia)
Intel – 3%
Salesforce – 2%
Twilio – 1.5%
Alteryx – 1.5%

BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save the 68-75 basis points. BOTZ is pretty easy to replicate with only ~10K.
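Replication is mechanical: buy the underlying stocks in proportion to the fund’s published weights. A rough sketch (the tickers, weights, and prices below are made up for illustration; a real replication would pull the fund’s daily holdings file):

```python
# Replicate an ETF by buying its holdings in proportion to their weights.
# Tickers, weights, and prices are hypothetical placeholders.

budget = 10_000  # dollars

weights = {"NVDA": 0.09, "ISRG": 0.08, "ABB": 0.07, "FANUY": 0.06}
prices = {"NVDA": 120.00, "ISRG": 430.00, "ABB": 55.00, "FANUY": 15.00}

for ticker, weight in weights.items():
    dollars = budget * weight
    shares = int(dollars / prices[ticker])  # whole shares only
    print(f"{ticker}: target ${dollars:,.0f} -> {shares} shares")
```

Saving a ~0.68% expense ratio on $10K is only about $68 per year; the real cost of replication is whole-share rounding error, which shrinks as the budget grows.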

Several people think that land will remain valuable in many scenarios. But I don’t see a good way to operationalize a bet on land. Some people have suggested buying options since it is easier to get leverage and the upside is higher. But getting the timing right seems tricky to me.

Physical and Emotional Preparation – You don’t want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with RSI (repetitive strain injury), work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you casually approach training. Track your results and have a system!

In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).

Political Organizing and Influence – Technological progress does not intrinsically help people. Current technology can be used for good ends, but it can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0. By the standards of previous eras, change accelerated a huge amount. ‘Singularity 1.0’ did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices, or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, “Only a crisis – actual or perceived – produces real change”. If there is a crisis coming, there may be large political changes coming soon, and influencing those changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for long. But various sorts of intergovernmental cooperation might be possible, and increasing the odds of such deals could be high value.

Capabilities Research – This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race or at least that GPT-4 will. In this case, it might make sense to help a relatively values-aligned organization win (such as OpenAI as opposed to the CCP). If you are, or could be, very talented at deep learning you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?

Maximum Reasonable Wealth

Assume you endorse basic leftwing principles of equality. How much wealth can you defensibly accumulate? I am explicitly trying to estimate the upper bound of what is reasonable. I would say a fair line is ‘enough money that you never need to work again’. You can plausibly justify liberating yourself from capitalism. But you cannot justify any more than that.

How much money do you need to safely stop working? I would say the answer is at most 400-500K. The 4% rule says you can withdraw 4% of your initial savings each year (adjusted for inflation) and never run out. The rate was originally computed by assuming:

  • You were invested 50/50 in treasury bonds and the S&P 500
  • You needed to survive 30 years
  • The earliest start date considered is 1925

Four percent was the highest rate that never went broke, even in the worst case. However, this rate is fairly conservative. It assumes you are not going to get any money from social security or other government programs. I also think it is unreasonable to expect 100% safety. If you have 400K, you are objectively much safer than the vast majority of people! If things look bad early, you can rebuild your wealth a bit. So I think a 5% rule is probably reasonable.

You can certainly live in most of the United States on 20K. I did it in Pennsylvania for years without any serious effort. It is harder to live in a rich area, but being liberated from capitalism does not mean you are liberated to live in San Francisco. The computations for a committed pair (or trio, etc.) are more complicated. Obviously, the upper bound of one million (500K each) holds, but many couples find they save quite a lot by living together. You can compute 20-25 times your combined yearly expenses and see how it compares.
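The arithmetic behind all of these numbers is the same: required savings is annual expenses divided by the withdrawal rate, which is where the 20-25x multiplier comes from.

```python
# Required nest egg = annual expenses / withdrawal rate.

annual_expenses = 20_000  # the frugal single-person budget discussed above

for rate in (0.04, 0.05):
    print(f"{rate:.0%} rule: ${annual_expenses / rate:,.0f} "
          f"({1 / rate:.0f}x expenses)")
# 4% rule: $500,000 (25x expenses)
# 5% rule: $400,000 (20x expenses)
```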

What are the implications of this? If you already have 500K in wealth, unspent income should immediately be donated or given away. If you do not already have 500K, you should definitely still give away money as you make it. Most people who claim they will donate later are just deluding themselves. And even if you really do donate later, amassing resources is always concentrating power. We need equality, not philanthropy. Following these norms can make it hard to own a home in even mid-priced cities. But there is always a leftwing solution. Housing can, and should, be owned collectively. The upper bound should not be used as an excuse to waste money instead of giving it away.

I reiterate this is just an upper bound. If you are comfortable with less of a safety net there is no need to accumulate so much. The advice in this article is only relevant if you have already decided to work a relatively lucrative job. If you are a full-time activist then you are unlikely to face the question of “how much wealth can I justify accumulating?”. But there are many people who identify as leftwing and do have lucrative sources of income.

Computing this sort of number at all can seem like a justification for inequality. But the conclusion has some radical-ish implications. For example, arguably the leading leftist in the United States is Bernie Sanders, who has a net worth of around 2.5 million dollars. This is objectively not ok, and fellow leftists should say so. It is especially unacceptable given that he is almost 80 years old and has access to many safety nets. He really should immediately sell off his property and donate the proceeds. In ‘Why You Should Be a Socialist’, Nathan Robinson defends Bernie’s wealth. Robinson is a leading leftist figure in the United States, and our conclusion is that he is insufficiently radical.

Mass Incarceration

The US incarceration rate is about 0.7%. About 2.3 million people are incarcerated, and about 4.5 million more are on probation or parole. The world median rate is around 0.14%. This means we incarcerate about five times as many people as we would if we had sane policies. I am skeptical you need to incarcerate anywhere near 0.14% of your population. But you certainly don’t need to incarcerate more!
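The arithmetic behind these figures is simple: scale the US prison population down to the world-median rate.

```python
# Excess incarceration relative to the world-median rate.

us_incarcerated = 2_300_000
us_rate = 0.007             # 0.7% of the population
world_median_rate = 0.0014  # 0.14%

ratio = us_rate / world_median_rate
excess = us_incarcerated * (1 - world_median_rate / us_rate)
print(f"{ratio:.0f}x the median rate; ~{excess / 1e6:.2f} million excess prisoners")
# 5x the median rate; ~1.84 million excess prisoners
```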

How bad is this? We have, at least, an extra 1.84 million people in jail. They don’t need to be there to maintain a reasonable level of safety. Many, probably most, of them are in conditions bad enough to constitute torture. A key premise, which I accept and you perhaps do not, is that punishment is never inherently good. We are also well past the point where we are deterring crime by incarcerating people. GiveWell says the following:

The best estimate of the impact of additional incarceration on crime in the United States today is zero. And, while that estimate is not certain, there is as much reason overall to believe that incarceration increases crime as decreases it.

Trapping people in jail for no good reason is effectively kidnapping. Kidnapping millions of people and torturing a large percentage of them is very bad! Some people would say I am making the ‘worst argument in the world’. But this is my actual feeling. If you forcibly imprison people you need a very good reason or else you are indeed kidnapping them. Maybe you did not intend to kidnap them but you did.

Mass incarceration in the USA is a human rights abuse on an extremely large scale. You should support any remotely reasonable efforts to reduce the incarceration rate. Certainly, you should support the Black Lives Matter protests regardless of whether some property gets damaged. Even if you quibble with the word kidnapping we are ruining millions of lives for no good reason. This has to end!


Please Save Us

Last night I prayed for the first time in over a decade. I don’t think God is very likely to exist. But I cried and begged for help. There are so many earthlings suffering terribly. Pigs are being steamed to death or kept in cages their whole lives. Humans are kept in solitary confinement. Animals in the wild are being eaten alive. The possibility of electronic lifeforms means the future might be even worse. Humanity has used its current power to build hell on earth and torture tens of billions of land animals. What will future humans use their greater powers to do?

But even if humanity comes to its senses wild animals will still be suffering massively. Some people think we should destroy most of the biosphere. But our brothers and sisters in the wild are earthlings too. They have the same right to live that we do.

It all feels like a bad LSD trip. Personal identity is ultimately an illusion. The fact that the suffering isn’t happening to ‘me’ doesn’t make it any less real. There is no escape without truly radical change. All we can do is hope and work for a better future. But I don’t know if I can believe that compassion will win in the end. I don’t know how much hope I have left for a good future.

God, if you are out there, please save your children. I don’t know if we can save ourselves. Our nature is to be cruel to one another. Please save us. Please.


People hate non-shockers

There is a well-known thought experiment about bad equilibria that goes something like this:

A society has the following rules:

— Everyone must publicly electro-shock themselves 8 hours per day.

— If you see someone not shocking themselves, then you must kill them.

— Anyone who violates the second rule must also be killed. The same is true for all meta levels. Non-killers of non-killers of … of non-shockers must be killed.

This equilibrium can get locked in even if every single person dislikes it; a minimal sketch below makes the incentives concrete.
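Holding everyone else’s behavior fixed, every unilateral deviation makes the deviator worse off, so the universally hated rules are self-enforcing. The payoff numbers here are purely illustrative:

```python
# Hypothetical payoffs: conforming is painful, but deviating gets you killed.

COST_OF_SHOCKING = -8          # 8 hours of shocks per day
COST_OF_BEING_KILLED = -1_000

def payoff(shocks: bool, punishes_violators: bool) -> int:
    """Payoff to one agent, assuming all OTHER agents follow the rules."""
    if not shocks:
        return COST_OF_BEING_KILLED                     # rule 2
    if not punishes_violators:
        return COST_OF_SHOCKING + COST_OF_BEING_KILLED  # rule 3 (meta-punishment)
    return COST_OF_SHOCKING                             # conform: unpleasant but alive

conform = payoff(True, True)
deviations = [(False, True), (True, False), (False, False)]
assert all(payoff(*d) < conform for d in deviations)
print("No unilateral deviation helps: the hated equilibrium is stable.")
```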

I think similar but weaker dynamics apply all the time in normal life. Why is there so much hyperbolic hate directed against furries? Most people are very repressed. Society has a norm that you need to keep ‘low status’ or ‘cringey’ aspects of yourself private. Furries openly violate these norms. People feel the urge to punish them even if they are not directly disgusted, because being seen as tolerant toward furries puts the tolerant observer at risk. The furries are effectively forcing people to choose between self-preservation and punishing someone else. People hate being forced to make these choices.

I don’t claim this is the only dynamic at play. People also like to look down on official low-status groups to feel better about themselves. But it is an important dynamic.

Non-Duality

There was once a poetry contest to determine the next Zen Patriarch. Shenxiu was the chief monk of the Patriarch’s monastery and the heir apparent. He wrote this:

The body is a Bodhi-tree
The soul a shining mirror:
Polish it with study
Or dust will dull the image.

No one openly challenged the poem. Huineng was a low-ranked monk who worked in the kitchen. At night he wrote his own poem below the chief monk’s.

Bodhi is not a tree;
There is no shining mirror.
Since All begins with Nothing
Where can dust collect?

The Patriarch chose Huineng as his successor, and Huineng had to flee for his safety. This split Zen into two schools. Huineng’s Southern School taught that enlightenment was sudden; Shenxiu’s Northern School taught that it was gradual. Eventually, the schools merged, but Huineng’s influence was great. Sudden enlightenment became the standard doctrine of Zen.

The Zen/Chan tradition teaches that we are fundamentally already enlightened.

The Chan tradition does not usually refer to steps or stages. Its central teaching is that we are intrinsically awake; our mind is originally without abiding, fixations, and vexations, and its nature is without divisions and stages. This is the basis of the Chan view of sudden enlightenment. If our mind’s nature were not already free, that would imply we could become enlightened only after we practiced, which is not so. If it’s possible to gain enlightenment, then it’s possible to lose it as well.

The question that immediately comes to mind is, “If we are already enlightened, why do we need to meditate?” There are many traditional answers. The above article offers this response.

In the Chan tradition, therefore, practice is not about producing enlightenment. You might wonder, “Then what am I doing here, practic­ing?” Because practice does help clean up the “furniture” in the “room.” By not attaching to your thoughts, you remove the furniture, so to speak. And once your mind is clean, instead of fixating on the chairs, tables, and so on, you see its spaciousness. Then you can let the furniture be or rearrange it any way you want—not for yourself, but for the benefit of others in the room.

I think it is often useful to look at practical cases. People are commonly advised to “submit to the experience” when they trip. “Be yourself” is such common advice it is a cliche. How can this sort of advice work? Isn’t the feeling of “freaking out and trying to make it stop” a part of the experience of a bad trip? You are already yourself. If you want different results something has to change about your approach. Despite these objections, I think both pieces of advice are quite good. But there is a koan-like quality to any advice that promises positive change through acceptance.

I have a history of struggling with loss of control. I often lucid dream, and when I realize I am dreaming I feel an extremely strong desire to force myself to wake up. This tends to make the experience negative. When I can relax and not try to ‘prove’ I can end the dream, I have a good time.

This all came to mind after a recent experience I had meditating while peaking on shrooms. Meditating made the experience dramatically stronger. Immediately after I started I reached a pretty deep state. I felt like I was standing on the wing of a giant bird flying through space. Things were clearly about to take off and get even crazier. Before things got too crazy I opened my eyes. I had lost an opportunity. But I realized that if I am going to practice acceptance I need to have compassion for my own flaws. I closed my eyes again and had a less crazy but quite good experience.

Last night I lucid dreamed. This time I did not try to wake up.

What does ‘Actually Trying’ look like?

Earlier today I talked to someone about why so few rationalists are continuing the work of developing rationality. I said that there actually seemed to be many rationalists working on mental technology. The problem is that he did not seem to be a fan of what people are actually working on; he would prefer they work on stuff closer to the CFAR handbook. There are some people working on things like that, for example my friend Shrikant and his group.

However, a lot of rationalists who are ‘actually trying’ have gotten into somewhat ‘weirder’ things. Here are some of the topics:

It is perhaps not obvious why ‘actually trying’ would correlate with openness to unusual ideas, even among rationalists. But perhaps it is not surprising at all. Eliezer had a lot of weird ideas. People who actually take up his mantle are not going to be afraid of getting into weird stuff. People who just like the community are more likely to stick to the basic rationalist stuff. Personally, I love the people who are carrying on the mantle of the sequences instead of trying to seem normal.

Dzogchen

…there is an ending [to Infinite Jest] as far as I’m concerned. Certain kinds of parallel lines are supposed to start converging in such a way that an “end” can be projected by the reader somewhere beyond the right frame. If no such convergence or projection occurred to you, then the book’s failed for you.

— David Foster Wallace

You can read Aaron Swartz explain the ending to Infinite Jest here. It is really a wild ride.

Raymond Arnold has argued that honest disagreements often take years to resolve, even if all participants are acting in good faith. Of course, Ray is descriptively right. Even when reasonable people eventually change their minds, it can take years. But I think we can often do better. It is easy to make mistakes with conservation of expected evidence, but the expected value of the change in your beliefs should be zero.
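A quick numerical check of that principle (the numbers are arbitrary): the prior always equals the probability-weighted average of the possible posteriors, so before seeing the evidence, your expected belief change is exactly zero.

```python
# Conservation of expected evidence: E[P(H|E)] = P(H), so the expected
# change in belief from observing evidence is zero. Numbers are arbitrary.

prior = 0.3              # P(H)
p_e_given_h = 0.9        # P(E | H)
p_e_given_not_h = 0.2    # P(E | not H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior_e = prior * p_e_given_h / p_e                  # belief if E is seen
posterior_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # belief if E is not seen

expected_posterior = p_e * posterior_e + (1 - p_e) * posterior_not_e
assert abs(expected_posterior - prior) < 1e-12
print(f"prior={prior}, expected posterior={expected_posterior:.3f}")
```

If you expect to end up more confident after weighing the arguments, you should already be more confident now.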

Often you can feel how the story is going to end. Perhaps you accept a certain argument and one of your true cruxes falls. Sometimes the way this manifests is that you feel an aversion to considering certain arguments. The reason for this aversion is that you don’t actually have a good response. All theories have anomalies. But clear your mind and imagine thinking about the issue for a long time. Or imagine looking back in twenty years. Wherever you feel you will end up in the future is where you should be now.

Dzogchen is a Buddhist practice most associated with Tibet. It has been described as “complete enlightenment in one instant”. Once you feel the parallel lines converging there is no need to draw things out. You can let go. It is genuinely inspiring to interact with people who can change their mind without delay. Become enlightened instantly!