People Will Listen

I have been thinking a lot about the crypto autopsy Scott posted in 2018. In retrospect, there was still an enormous amount of money to be made ‘buying the dip’ in BTC/ETH. And there was even more money to be made buying altcoins. Scott also links to this thread from 2015 strongly advising people to buy bitcoin at around $230 (approximately 250x gains on whatever you bought). The earlier bitcoin discussion on lesswrong might have represented an even more lucrative opportunity, but this is some of the best completely explicit advice ever posted on the forum:

LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it more as soon as possible, that people here did not yell at me louder that it was important, and to go take a look at it.  In that spirit, I will do so now.

This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision.   Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live.  Based on the new all time high being hit in number of transactions, and ways to spend Bitcoin, I think there is at least a reasonable chance it will live.  Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet.  A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.

And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.

As I am writing, the thread itself has four upvotes. Conversely, the following comment has twenty-six (this thread long predates variable-weight votes, though we can still vote on it going forward):

I used to believe that bitcoin is under-priced before, but there are so many agents involved in it now (including Wall Street), that I can’t really convince myself that I know better than them – the market is too efficient for me.

Additionally, I’d be especially wary about buying based on arguments regarding the future price based on such obvious metrics, that many agents pay attention to.

This seems like a really strong indictment of the community’s collective rationality. On the other hand, I have been posting some financial advice threads on lesswrong. I have posted much more advice on rationalist-adjacent Facebook and Discord. People listen. I frequently get messages from people telling me they made money thanks to my posts. Several friends of mine got into Solana around $2.50 at the same time and have made six or seven figures from that investment. A few people got in later or for smaller amounts and still made meaningful amounts of money (Solana is no longer a truly amazing investment but it’s still worth buying/holding). Villiam’s comment is important to keep in mind:

Some of us were smarter than others. Good for them! But if we want to help each other, and avoid having the same thing happen the next time, next time when you see an exceptionally important article, don’t just think “others have read the same article, and they are smart people, so they know what to do”. That’s another form of illusion of transparency; after reading the same text, some people will jump up, others will just continue reading. Here are two things you can do to nudge your fellow rationalists in the right direction:

1) Imagine a person who has very little knowledge in this specific area, and for some reason is not going to study more. Can the whole thing be simplified; ideally into a short list that is easy to follow? For example: “Step 1: register online at BitStamp. Step 2: send them the required KYC documents. Step 3: do the money transfer. Step 4: buy Bitcoins. Step 5: HODL!” More people will follow this procedure, than if they just read “buy and/or mine some Bitcoins, find out how”.

2) Offer help at your local meetup. Make a short lecture, explain the details, answer questions. When people are interested, guide them step by step.

It is very hard to grok how much context people are missing. You need to make it as easy as possible for people to follow your advice. And you need to give them a real plan. Many of us held a lot of bitcoin at one point or another. Just buying bitcoin was not enough; you needed to stick to a plan for selling. I personally like ‘sell 20% at every doubling’ for volatile investments. Give people the tools to succeed. A good friend of mine wanted to get into Solana for a few thousand dollars but found it difficult to buy. He is out tens of thousands of dollars because I did not make it easier for him to buy. He really could use that money.
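To make that concrete, here is a minimal sketch of the ‘sell 20% at every doubling’ rule. The entry price and the price path are made up for illustration:

```python
# Toy illustration of "sell 20% at every doubling" (all numbers are made up).
units = 400.0        # e.g. $1,000 of a coin bought at $2.50
entry = 2.50
trigger = 2 * entry  # first sale when the price doubles
banked = 0.0

for price in [3, 6, 13, 27, 55, 110]:   # hypothetical price path
    while price >= trigger:
        sold = 0.20 * units              # sell 20% of the remaining units
        banked += sold * trigger         # assume the sale fills at the trigger
        units -= sold
        trigger *= 2                     # next sale at the next doubling

print(f"banked ${banked:,.0f}; remaining stake worth ${units * 110:,.0f}")
```

The point is that the plan is mechanical: you always keep most of your upside, but you also always take profits, so no outcome leaves you with nothing.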

Am I therefore become your enemy, because I tell you the truth? – Galatians 4:16

The problem is that the more explicit you are the more pushback you should expect to receive. If you just explore a topic and ‘hint hint’ at your advice you won’t expose yourself to the same types of criticisms. You don’t need much plausible deniability to shield yourself. However, you cannot shield yourself very much if you are giving step-by-step instructions and making clear claims. We should take trivial inconveniences very seriously. Therefore we must not add any unnecessary friction. 

My advice is to accept that ‘haters are gonna hate’ and just take the hit. Make your arguments as clear and your advice as easy to follow as possible. But understand that no matter what you do, if you tell people to buy bitcoin at $230, the top comment might be critical. Some people will listen and benefit.

As a group, we should really be taking Aumannian reasoning and noticing confusion more seriously. If something very interesting is going on we need to stop and think and come to a logical conclusion. Obviously, not everyone is going to agree. But at least smaller groups should be able to ‘Stop, Drop and Think’. I hope we can do better as a group but if we cannot, or you leave lesswrong, at least Stop, Drop and Think when you notice a potential opportunity. The rewards for making the right choices can be enormous. 

This post mostly argues in favor of sharing your well-thought-out positions. But another implication of ‘people will listen’ is that you should not give advice flippantly. Don’t quickly fire off conventional wisdom. If you have not thought through your counsel make sure you make this clear. Of course, if you have thought things out I strongly suggest speaking your mind.

There is a season for all things and this seems like a season for thinking about crypto. However, I think pretty much every season is for thinking about money. Money is instrumentally useful for the vast majority of goals. If you want to ‘win’ you should probably think at least a little about money. But I also think these principles extend across domains and are not limited to financial decisions.

The EMH is False – Specific Strong Evidence

I am going to defend the following response to “If you are so smart why aren’t you rich?”: Rationalists actually are smart, but we were way too modest and did not bet on our beliefs. The rationalists who actually tried to use rationality to invest often traded extremely lucratively. We should stop being so modest moving forward. Ideas have consequences, including for asset prices. In my opinion, anyone who has been on lesswrong for a long time really should be at least financially independent.

I will first present the best evidence I have that the EMH is quite false: there are currently high-return trades (5% a month at least, possibly more) with extremely low risk (you can lose 1-2% max, probably less depending on execution). These trades take a little execution work but do not require being a professional. In the recent past, there were VERY simple bets that returned ~10% a month with even less risk. I will describe both of these trades and then talk about more speculative evidence.

By the EMH I mean this practical form: People cannot systematically outperform simple strategies like holding VTSAX. Certainly, you cannot expect to have a higher expected value than max(VTSAX, SPY). Opportunities to make money by active investing are either very rare, low volume, or require large amounts of work. Therefore people who are not investing professionally should just buy broad-based index funds. 

I would say that for many asset classes you should have a reasonably strong prior that the current price is correct. I would include stocks and normal sports bets. However, this prior is weak enough that the standard to overcome it is basically ‘convincing argument from a friend’. It is important to approach this with the same mindset you would use to make predictions, and to be reasonably detail-oriented. I am not claiming it is trivially easy to beat the market, just that it is very doable.

Post Election Trump Betting

You could lucratively bet against Donald Trump long after the election on various platforms. The most lucrative way known to me was to short ‘TRUMPFEB’ on ftx.com. TRUMPFEB was a token that would pay out $1 if Trump was president on Feb 1st and $0 if he was not. The tokens were tradeable. Importantly, you could short them to bet against Trump. If TRUMPFEB was selling for 8 cents and you shorted it, you were essentially risking 92 cents to win 8 cents. Here is a graph of the price over time and some specific values.

Source images: https://www.coingecko.com/en/derivatives/ftx/TRUMPFEB

Roughly, the markets gave Trump a 15-17% chance until Nov 22nd, a ~10-11% chance until Dec 10th, and a ~5-6% chance until January 5th. The odds were non-trivial until the electoral college met in person. You could short TRUMPFEB with 2x leverage, which doubled your returns. You could easily have placed a million-dollar bet shorting TRUMPFEB long after the election. FTX was not available to Americans, but you could have gotten somewhat less lucrative odds betting on Polymarket or Catnip, both of which were available to Americans. At least one rationalist I know made over 100k betting on Polymarket. The Polymarket election market had nine figures of volume. PredictIt had low limits and high fees but many other platforms did not.
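For concreteness, here is a rough sketch of the payoff math for shorting a $0/$1 binary token at prices like the ones above. It ignores fees, collateral mechanics, and liquidation risk, so treat the numbers as approximations:

```python
# Rough expected return of shorting a binary token (fees and margin ignored).
def short_ev(price, true_prob, leverage=1.0):
    """EV per dollar of collateral when shorting at `price`, if the event's
    true probability is `true_prob`. The token pays $1 if the event happens."""
    win = price            # token expires at $0: keep the sale price
    lose = price - 1.0     # token expires at $1: pay out the difference
    return leverage * ((1 - true_prob) * win + true_prob * lose)

# Shorting at 10 cents when you think Trump's true odds are ~1%:
print(short_ev(0.10, 0.01))               # ~0.09 per dollar, unlevered
print(short_ev(0.10, 0.01, leverage=2.0)) # ~0.18 with 2x leverage
```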

Safe high return trades exist right now – Perpetual Future Arbitrage

This trade is harder to explain and trickier to execute but it does not require being a professional. A ‘perpetual future’ is a contract that mimics an underlying asset. You can buy them ‘going long’ or you can ‘sell’ them to other people by shorting. For simplicity, let’s talk about BTC and BTC-PERP. BTC-PERP is a tradeable asset that works as follows on ftx.com:
— Every hour, compute the average prices of BTC and BTC-PERP over the last hour.

— If BTC-PERP traded higher, longs pay shorts. If BTC traded higher, shorts pay longs.

— The amount paid is 1/24th of the discrepancy (on Binance these payments trigger every 8 hours and the payment is 1/3 of the difference).
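In code, the mechanism described above looks roughly like this (a simplified sketch based on the description; FTX’s actual calculation has more details):

```python
# Simplified sketch of hourly perpetual funding, per the description above.
def hourly_funding_payment(avg_perp_price, avg_spot_price, position_coins):
    """Payment owed by a position for one hour. Positive = you pay,
    negative = you get paid (matching the screenshot's convention below).
    Pass a negative position_coins for a short."""
    premium = (avg_perp_price - avg_spot_price) / avg_spot_price
    hourly_rate = premium / 24                  # 1/24th of the gap each hour
    return position_coins * avg_spot_price * hourly_rate

# If BTC-PERP averaged 0.05% above spot for the hour, a 1 BTC short
# collects about $1.04 (hypothetical prices):
print(hourly_funding_payment(50025.0, 50000.0, -1.0))  # ≈ -1.04
```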

Here is what this looks like in practice. The second-to-last column is the ‘payment’ (negative means I got paid) and the last column is the hourly rate. (These are some example holdings; this is a real account but not a proposed portfolio):

Open in a new tab to actually read the image

The above account is short all the coins. 

In general, perpetuals trade above the price of the underlying coins. The underlying reason is that it is much easier to use leverage when buying perpetual futures. This effect is especially strong if the market is bullish (as it is right now).

The way to make this into an arbitrage is to buy the underlying coin and short an equal amount of the perpetual. For example, buying Bitcoin and shorting BTC-PERP. Prices almost never differ by more than 0.2%. 

The account above has a value of around 32k. If you add up the payments you will notice I received more than five dollars in one hour. That is 840 USD a week, or about 2.6% of the account. This payment is a little better than I would expect going forward, but if the market remains bullish you can get 10% monthly doing these trades. I would be surprised if you could not get at least 5% returns over the next 30 days. Returns were even better a few months ago. However, rates were sane until about three months ago and will presumably return to sanity one day. But for now, you can grab some safe returns if you do this trade skillfully. This is not as simple as ‘bet against Trump’ and you probably need some python scripts to grab funding rate data. You also need to be careful about how you enter and exit positions. There is some work involved here, but it easily makes sense if you have decent amounts of capital to invest (or can somehow pool some capital).
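A quick back-of-the-envelope check on those numbers, assuming the $5/hour rate held (which, as noted, is a bit optimistic):

```python
# Sanity check on the quoted figures (assumes the funding rate persists).
account = 32_000
hourly = 5.0                                  # dollars received per hour
weekly = hourly * 24 * 7
print(weekly, weekly / account)               # 840.0, ~2.6% per week (simple)
print((1 + hourly * 24 / account) ** 30 - 1)  # ~12% over 30 days, compounded
```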

However, the ‘work involved’ does not explain why this has not been arbitraged away as predicted by the EMH. Many large players are actually doing these trades. But at least for now, the ‘smart money’ cannot close the gaps. Markets are often inefficient for a long time.

Backtesting Rationalist Investing

Hopefully, the previous examples have garnered me some credibility, because I am about to do something very dangerous. The perp arbitrage is current, and I told people to bet on the elections well ahead of resolution (I told people to bet post-election on various discords). But I think one very obvious implication of ‘rationalist thinking’ was to bet on AI progress. It was especially clear you should do this after AlphaGo. Conveniently, AlphaGo came out a little over five years ago, so we can check the five-year returns of some plausible AI investments:

GOOG 3x

Nvidia 15x 

AMD 30x 

Intel 2x

Tesla 14x

Microsoft 4.5x

BOTZ ETF 2.5x

TSM 4x

Facebook 2.5x

QQQ (Tech sector ETF) 3x

ARKQ (ETF) 4x

SPY (Obvious control group) 2x
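One crude way to summarize the table: compare an equal-weight basket of the eight individual stocks above against the controls, using the approximate multiples as listed:

```python
# Equal-weight basket of the listed individual AI bets vs. the controls.
multiples = {"GOOG": 3, "NVDA": 15, "AMD": 30, "INTC": 2,
             "TSLA": 14, "MSFT": 4.5, "TSM": 4, "FB": 2.5}
basket = sum(multiples.values()) / len(multiples)
print(f"equal-weight AI basket: {basket:.1f}x vs QQQ 3x vs SPY 2x")  # ~9.4x
```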

You can argue that returns were driven by ‘tech stocks did well’, but GPU stocks did much better than tech in general. And I do think it’s fair to assume a rational investor who was looking to bet on AI progress would have put some of their portfolio into GPU manufacturers once it became clear that the ‘more compute’ paradigm was going to be influential. In my opinion, AlphaGo was a good time to be convinced of the ‘more layers’ thesis, and at the time most big AI projects were trained on Nvidia GPUs.

Much has also been made of the fact that Bitcoin was mentioned very early on lesswrong. 

I changed my mind, Now I’m feeling different

We were young
We were young
We were young, we didn’t care
Is it gone?
Is it gone?
Is it floating in the air?
I changed my mind
I changed my mind
Now I’m feeling different

All that time, wasted
I wish I was a little more delicate

I think it is usually best to focus on the strongest arguments for your case, so I won’t go into various weaker ones in any depth. But I will note it is possible to lend quite safely at high rates (~20% long term, often much higher APY short term). The ‘Equity Premium Puzzle’ is also a well-known anomaly. It is very hard to explain why treasury bill returns have been so low relative to stocks for over a hundred years. I mention these as examples, but I ask people to stick to object-level counterarguments against my main points. I have been telling people how to beat the market for months. I am telling you how right now. So it doesn’t really make sense to make meta-arguments about how the things I am saying are impossible. The object-level arguments have been laid out.

I too once believed in the EMH but I changed my mind. The third virtue is lightness. In a few days I hope to follow this post up with a sketch of what is still available, practical advice on amateur trading and various paths forward. 

My own mistakes

I should have written about this much earlier. I did tell people to Bet on Biden pre-election, but that post had many flaws, such as emphasizing PredictIt instead of Polymarket. In addition, post-election betting was much more obviously lucrative than pre-election betting. The last few months have also been extremely lucrative because of the crypto boom. I talked about many opportunities on various discords but did not post anything systematic: I did not post ‘Bet on Biden v2’ to lesswrong, nor did I post about crypto. Regardless of the reception this thread gets, I should have posted the information sooner. It is notable that many people thanked me for the ‘Bet on Biden’ piece, and it really hurts me that I did not sound the alarm louder on later opportunities. I also believe I cost myself a large amount of money by locking myself out of good counsel. I can make various excuses but I will not do so. Maybe my living and family situations played a factor in my bad judgment. Regardless, I strongly regret my behavior.

Inclusion and Effective Altruism

Most people involved in effective altruism want to be inclusive. However, some people think the community can feel exclusive to people who are ‘average’. It seems like a decent number of people feel discouraged when they try to move from medium to high engagement with Effective Altruism. Why do people feel this way? To be clear I mean ‘inclusive’ in a broad sense: Can people who want to become involved in the community do so in meaningful ways? Will they feel included?

A very large percentage of EA funding comes from one couple. Estimates suggest that Dustin Moskovitz and Cari Tuna plausibly account for over half of total EA funds. Wealth is quite concentrated in most countries. So it seems more tractable to try to influence large donors, even though the average American household donated a few thousand dollars last year. I endorse the common advice to focus on absolute impact. But even a small probability of influencing a large donor is worth more expected dollars than most people can realistically donate. One promising initiative is Founders Pledge, but there is still a limit to how many people can justify founding a startup.

The EA community is clearly trying to expand its influence in academia. This is probably a sensible strategy. This analysis suggests it worked well for the ‘neoliberal’ movement. However, it is not the easiest project to contribute to. Getting any sort of academic position is very difficult. And academic prestige is quite hierarchical; top schools are much more influential (though there are some interesting outliers like George Mason).  

The EA community also tends to promote a very conservative approach. For example, EAs have been warned not to rush to translate material. It does seem reasonable to be worried about lock-in and EA concepts can be hard to translate. Lock-in risks have also been raised with respect to founding EA groups. Though it seems like forming student groups is still heavily encouraged. If you engage a lot with effective altruist literature you will absorb a message that it is important to get many things right the first time. 

It is worth explicitly asking why the community even wants to be inclusive. Jane Street Capital is not very inclusive. The vast majority of people who want to work there cannot. Some people I have spoken to feel that EA is trying to solve hard problems and those problems are basically unrelated to inclusion. If you accept fewer warm fuzzies you can get more done. In their view, it is basically a category error to even consider inclusion except insofar as inclusion helps solve the hard problems. How can we reply to them on their own terms? 

I think there are a few instrumental reasons to focus on making the community feel inclusive. It is genuinely difficult to know who is going to be in a position to help or put in an important good word. We don’t know which perspectives will turn out to be influential. Exclusion precludes serendipity. Weak ties are less likely to help if the community is perceived as discouraging. It is very bad for the movement if the wrong people drop out because they or their friends feel unable to contribute. We don’t want to reduce our exposure to tail windfalls.

I think there are structural problems and no easy answers. But if the community is going to emphasize strategies that are hard to contribute to, then we should consider ways to mitigate the downside. Are there straightforward and low-risk ways to get involved that we should promote more? Should we reconsider some decisions, like de-emphasizing earning to give and reducing the size of EA Global? Donating money is relatively accessible compared to alternatives. I am sure there were sensible reasons to reduce the size of EAG, but many people I know were hurt when they could not attend. I am not suggesting we prioritize inclusion over everything else, but I do think we should weight it more heavily when making tradeoffs and considering funding decisions.

Compassion for the MAGA Protestors

As someone quite far on the left I have to say:

I do not support police brutality against disruptive protestors. I do not support handing out prison sentences to disruptive protestors. The carceral system is incredibly harsh in the United States. We need to be consistent in our opposition to American-style criminal injustice.

We do not know the details of everything that went on and some of the protestors may have acted very badly. But the majority of the protestors should have been allowed to safely go home. The police were initially restrained but became much more violent at night. One woman was shot by the police and based on the video her death was a tragedy. Dozens of people were arrested. I hope they get off relatively easily. Even people who commit violence or murder do not deserve the severity of the US carceral system.

For the most part, the protestors were not trying to install themselves as part of a dictatorial regime. I assume the vast majority of them were normal people who thought the election was stolen. I do not think there is much evidence for this claim, but people they trusted repeatedly told them it was true. It was even more delusional to think this disruption would lead to anything positive from their point of view. The protests have already led to thedonald.win being put at risk. Further marginalization of their movement is coming.

The protestors were not really an organized fascist group. But even if they were literal Nazis I would not glorify the police and carceral system. The balance of power can change, and situations like the Indonesian Mass Killings should make us very wary. I can also understand why the victims of police brutality might feel vindicated to see the police turn on the blue lives matter crew. The protestors were fellow humans who were sadly misled. They, like all people who have broken or arguably broken laws, deserve compassion and understanding. Even if people deserve punishment they don’t deserve the severity of the US system.

The USA has had the top incarceration rate in the world for over a decade. The police constantly break laws and use excessive force. I see many people calling for the protestors to be prosecuted to the full extent of the law. In America that often means destroying their lives. I hope the protestors come to their senses and are able to return to their lives. We really need to keep our priorities straight and maintain a compassionate mindset.

True Masters

I want both the Hindus and Mussalmans to cultivate the cool courage to die without killing. But if one has not that courage, I want him to cultivate the art of killing and being killed rather than, in a cowardly manner, flee from danger. For the latter, in spite of his flight, does commit mental himsa. He flees because he has not the courage to be killed in the act of killing.

My method of nonviolence can never lead to loss of strength, but it alone will make it possible, if the nation wills it, to offer disciplined and concerted violence in time of danger.

My creed of nonviolence is an extremely active force. It has no room for cowardice or even weakness. There is hope for a violent man to be some day non-violent, but there is none for a coward. I have, therefore, said more than once….that, if we do not know how to defend ourselves, our women and our places of worship by the force of suffering, i.e., nonviolence, we must, if we are men, be at least able to defend all these by fighting.

Between Cowardice and Violence by Gandhi

Many people are surprised that Gandhi would explicitly say that violence is preferable to cowardice. Gandhi is often cited by those denouncing ‘violent’ opposition to injustice. But Gandhi did not have some strict deontological opposition to violence. Gandhi believed that non-violence was the best way to win the struggle against injustice. And that the society that results from non-violent revolution would be superior to the result of violent revolution.

In spite, however, of the madness and the vain imitation of the tinsel of the West, the hope lingers in me and many others that India shall survive this death dance and occupy the moral height that should belong to her after the training, however imperfect, in nonviolence for an unbroken period of thirty-two years since 1915.

An India reduced in size but purged in spirit may still be the nursery of the nonviolence of the brave and take up the moral leadership of the world, bringing a message of hope and deliverance to the oppressed and exploited races. But an unwieldy, soul-less India will merely be an imitation, and a third-rate imitation at that, of the Western military States, utterly powerless to stand up against their onslaught. I have no desire to outlive the India of my dreams.

India and the Violent Way by Gandhi

Nevertheless, Gandhi’s conception of non-violence was quite expansive. He believed in radical criminal justice reform. It is much harder to support a principle if you don’t truly think it works. Gandhi believed in an India free of both colonialism and internal oppression. He thought non-violence was the best way to achieve his dream.

Polite Order

Anyone else concerned that US politics has suddenly collapsed from civil arguments over redistribution amounts & mechanisms (left vs right) to street violence between rival gangs?
Polite order is my ingroup. Civil unrest is my outgroup. This rapid transformation and the loss of shared anti-violence culture horrifies me. The threat to the lives of the urban underprivileged from a collapse of civil order is VASTLY greater than the minuscule number of unjust murders by police. These rioting fools are ignoring the history of failed state body counts.
— Patri Friedman, Facebook

Patri Friedman is the founder and current board chairman of the Seasteading Institute. He is the grandson of the famous libertarian economist Milton Friedman and the son of the ancap economist David Friedman. He has received substantial funding from Peter Thiel. He is a pretty influential person in rationalist-y libertarian circles. And many people have said similar things. This point of view is hard for me to understand.

It seems to me that consistent supporters of freedom should be ecstatic about Black Lives Matter. The USA incarceration rate is truly ridiculous. We incarcerate at a rate 6.3 times higher than the EU average. The white incarceration rate is four times, and the Black incarceration rate twenty-two times, the EU average. About a third of Black men have a felony conviction. This represents a truly enormous loss of human freedom.

A serious obstacle to change is that the police were considered rather trustworthy and honest by the public. In 2019, 54 percent of people said the police had high or very high honesty and integrity. Of course, police lie all the time. Police also take more property than robbers. In many circles, the famous Don’t Talk to the Police video is considered common-sense advice. We know the police cannot be trusted; it would be better if the wider public knew. BLM is starting to change the public perception of the police. We cannot protect our rights until the public becomes much more skeptical of the police. Even if you are not concerned about racism, you should enthusiastically support BLM if you truly prioritize freedom.

You can only serve so many masters, perhaps just one. In theory, Patri is in favor of human freedom. But it seems to me his true master is ‘polite order’. It is clear how Patri will side if freedom trades off against order and property rights. You cannot model people based on what they abstractly support. You need to know which masters they serve.


Animal Rights and AI Risk

A few weeks ago I was protesting outside a pig slaughterhouse. I thought about GPT-3 a bit. I am afraid of a bad singularity, but the bad singularity has already come for the pigs with the explosion of human intelligence, and as a result, everyone they love is definitely going to die horribly. The idea that technology might lead to hell on earth is not a theoretical problem. It’s happened before. I wonder how well a future singularity is going to go for people like me.

Many people are rightly concerned about technical AI alignment. Successful technical alignment would mean that each AI system is permanently aligned with the interests of some subset of humans. Is this sufficient to be confident of a good future? There are still serious risks even if the dominant coalition of AIs is aligned with something like the collective opinion of humanity (this would include an aligned singleton).

Here are some reasons why the future might be full of astronomical suffering:

  • Economic productivity: Suffering might be instrumental in achieving high economic output. Animal suffering in factory farms is a case in point: it just so happens that the most economically efficient way to satisfy the demand for cheap meat involves a lot of suffering. This is not currently an s-risk because it’s not astronomical in scope, but it’s possible that future technology will enable similar structures on a much larger scale. For instance, the fact that evolution uses pain suggests that learning might be more efficient if negative reward signals are also used, and we might consider sufficiently advanced and complex reinforcement learners to be capable of suffering.
  • Information gain: Experiments on humans or other sentient creatures might be useful for scientific purposes (like animal testing), while causing harm to those experimented on. Again, future technology may enable such practices on a much larger scale as it may become possible to run a large number of simulations of artificial minds (or ems) capable of suffering.
  • Entertainment: Many humans enjoy forms of violent entertainment. There are countless historical examples (gladiator fights, public executions and torture, hunting, and much more). While nowadays such entertainment is often fictional (e.g. in video games or movies), some real-world instances still exist (e.g. torture and execution videos, illegal animal fights, hunting). It is conceivable that complex simulations will be used for entertainment purposes in the future, which could cause serious suffering if these simulations contain artificially sentient beings.

Every detail you add to your prediction makes it less likely, but it seems like there are many ways things could go wrong. If we think rapid technological change is plausible, then we should be paranoid about causing or tolerating tons of suffering. Humanity’s current behavior suggests we are very far from being sufficiently paranoid about suffering.

As far as I can reason, the extent of animal suffering is the most important injustice of our time, but it isn’t obvious to everyone. If a transformative superintelligent AI is in our near-to-medium future, it seems rather urgent to shift the distribution of opinion on the importance of suffering. It would be ideal if you could influence the opinions of the people most likely to control transformative AI.  There are many plausible options including direct activism, gaining political influence, or doing fundamental research. It is important to keep replaceability in mind when deciding how one can best contribute. It is worth noting many actions are still valuable even if transformative AI is far away.

Our treatment of animals, and to a lesser degree other humans, is extremely troubling evidence. It suggests that low power agents will be mistreated if there are economic or ideological reasons to support their mistreatment. The near-to-medium future might have dramatically more inequality and potential for mistreatment.

It is unclear how much time is left until a transformative AI takes off. But it is worth thinking about how much values can be changed and which institutions are value-aligned with a future free of astronomical suffering. I also think it is better to start taking some sort of action now instead of just planning. You can always change your plans later as long as you avoid doing anything counter-productive.

Notes:

  1. In general, I think issues of severe suffering are more important than issues of distributing gains. The gains from AI might be very concentrated – horses and chimps did not gain much from the rise of humanity. There is a plausibly high-value political project of ensuring the gains from AI are somewhat evenly distributed. Some organizations like OpenAI already support this goal.
  2. Plausibly you can focus on getting AI to learn human values and extrapolate them to something like our ‘Coherent Extrapolated Volition’. Related MIRI paper. In addition to the CEV being constructible, it seems like we need to assume at least one of two things: either the CEV of most subsets of humanity matches the CEV of humanity as a whole, or we have to assume the people controlling AI will altruistically choose to encode humanity’s CEV instead of their own.
  3. There are plausible functional decision-theoretic reasons to selfishly want to be in the coalition that ‘robustly cares about the suffering of less powerful agents’.
  4. Some people rate especially reviled prisoners as less deserving of concern than plants. The distressing evidence is not limited to our treatment of animals.

A 2016 study that asked US participants to rate 30 different entities by the “moral standing” that they deserved found that “villains” (murderers, terrorists, and child molesters) were deemed by participants to deserve less moral standing than “low-sentience animals” (chickens, fish, and bees), with mean scores of 1.77 and 2.64 respectively on a scale from 0 to 9. Indeed, they were deemed to deserve less moral standing than non-sentient “environmental targets” (3.53) or plants (2.52).

Crazy Life Advice

What is some ‘crazy’ life advice that is plausibly a good idea? I am including stuff from ‘real crazy’ to ‘a little offbeat’. Here are some ideas of various spiciness:

Finance:

1) Borrow money -> bet on red. If you lose declare bankruptcy. – This one seems possibly +EV for people with low current net worth.

2) Get married to avoid college debt since it drops expected family contribution to zero – just a 10/10 idea?

3) Wall Street bets as a defense against insider trading claims. – If you suddenly bet a lot on options when you have info you will probably get caught insider trading. But if you have a history of crazy options trading you can say the bet that paid off was just a hunch too.

4) Invest a large amount of your net worth in plausibly AI-entangled companies – I shifted most of my investments into this portfolio after GPT-3. The singularity is maybe near. Sorry, EMH.

5) Green card lottery every time even if you don’t plan to move to the USA – The value of a green card is high. Depending on your country and situation you might have a 0.2-2%+ chance to get the green card. If you win the lottery, reassess your options.

6) Facilitate moving money to advantaged donors – The benefits of donating to charity vary a huge amount based on who donates the money. You can double your money if you get an employer match. The tax benefits of donating vary quite a bit. You get no benefit as an American if you take the standard deduction. Tax advantages also vary a lot by country. In France, tax credits are calculated at 66 percent of the value of the donation, and an individual’s total tax credits for one year may not exceed 20 percent of their taxable income. There are even crazier ideas where you try to buy ‘lottery ticket’ options and only donate the ones that go up. There is a Google Colab where you can play around with the numbers; a minimal sketch of the same arithmetic is below.
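The sketch uses the two mechanisms mentioned above, an employer match and the French-style tax credit; the default income figure and match rate are illustrative assumptions:

```python
# How far one donated dollar goes with a match and a tax credit (illustrative).
def effective_donation(gift, match_rate=1.0, credit_rate=0.66,
                       credit_cap_fraction=0.20, taxable_income=50_000):
    """The charity receives gift * (1 + match_rate); the donor recovers a
    credit of credit_rate * gift, capped at credit_cap_fraction of taxable
    income (the French rule described above)."""
    received = gift * (1 + match_rate)
    credit = min(credit_rate * gift, credit_cap_fraction * taxable_income)
    return received, gift - credit

received, cost = effective_donation(1_000)
# With a 1:1 match and the 66% credit, $1,000 delivers $2,000 to the
# charity at an out-of-pocket cost of $340:
print(f"charity gets ${received:,.0f}, donor pays ${cost:,.0f}")
```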

Other:

1) Make decisions with double-blind 2nd price auctions – The Beeminder couple does this. Description.

2) Grad school as a way to get 2+ years of slack while you work on a different project – I wish I had done this tbh. Here is Andrew Critch’s writeup.

3) Selective Radical Honesty – Being radically honest in general seems like a mistake. Very few people can make it work. But it is practical to practice radical honesty with at least some of your closest relations. By this, I mean that you will freely explain your world models and actual feelings without much of any editing. I will say you should probably hold back some of your mental models. If you share all the ways you evaluate people you risk being Goodharted. I am extremely open and honest with my primary partner and some of my friends.

4) Van / Slackmobile life – If you are skilled you can build a rather livable slackmobile for 10K. Doing it with 20-30K is easier but still affordable for many people. The quality of life in a modified box truck is surprisingly high. Here are some details on building a slackmobile including a full budget.

ht: Noah Kreuter, Brian Liu, Char/Astria, Robert Sharpobject

I will update this list if I find more interesting ideas.

Replaceability Concerns and Possible Responses

How concerned should we be about replaceability? One reason some people don’t seem that concerned is that the leaders of EA organizations reported very high estimates for the value of their new hires. About twenty-five organizations answered the following question:

For a typical recent Senior/Junior hire, how much financial compensation would you need to receive today, to make you indifferent about that person having to stop working for you or anyone for the next 3 years?

The same survey showed that organizations reported feeling more talent constrained than funding constrained.

On a scale of 0 to 4, respondents saw themselves as 2.8 constrained by talent and 1.5 by funding, similar to last year and consistent with the donation trade-off figures.

The 2019 survey replicated the results on talent vs funding constraints. It also had useful information on which skills organizations felt were in demand.

Replaceability

On the other hand, ‘After one year of applying for EA jobs: It is really, really hard to get hired by an EA organization’ is one of the highest upvoted threads on the EA forum. The applicant had a very strong resume and applied for twenty positions. He was rejected by all of them. He lists all the positions he applied for. The author claims his situation is not unusual:

I know several people who fulfil all of the following criteria:

– They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school)

– They are within the top 5% of their course

– They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, …)

– They are very motivated and EA aligned

– They applied for at least 5 positions in the EA community and got rejected in 100% of the cases.

He had gotten encouragement from some well-informed EA leaders. Said leaders seemed pretty surprised at how badly his job search went. The comments, in general, are very informative. Several people give full details of their EA job searches. However, I will quote from one comment by the author:

People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to a point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led the operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.

I assume EA leadership has become more conservative in the advice it gives, since the thread was very widely read. But I don’t think the fundamentals of the EA job market have changed very much.

It is somewhat surprising the EA job market is so competitive. The community is not terribly large. Here is an estimate:

This suggests that there are roughly 2000-3000 highly engaged EAs in total.

Likewise the estimated size of the EA community more broadly, worldwide, is only 5000-10,000 or about the size of a small town or university.

This suggests to me that a very large fraction of highly engaged EAs are interested in direct work.

Two Perspectives

One point of view is roughly the following: It is extremely important to get the best person for the job. Organizations highly value their recent hires but seemingly lack the capacity to effectively ramp up hiring. Performance in many positions varies a huge amount. Therefore if there is even a chance you are the best fit for a job you should apply. Organizations are slow to fire bad performers. People are bad at judging their aptitude. If we discourage applicants, there will be very strong applicants too humble to apply. If people build career capital in hopes of getting an EA job they are unlikely to be hurt even if they never do direct work. The expected value of searching over a larger set of applicants is high.

A different point of view is that it looks like a huge percentage of engaged EAs want to do direct work. Engaged EAs as a group are quite talented. Unless you are an unusually good fit for a direct work job that job should, and probably will, go to someone else. There are some direct work jobs that require genuinely rare skills. But if the job seems like it could be done by an average Princeton grad, then it will probably have qualified applicants and you are unlikely to be the best fit. The risk of discouraging people is real but there are real costs to giving people the wrong impression of their prospects. People can feel betrayed and lied to. Engaged EAs might become discouraged or decide they cannot trust the EA information ecosystem. The early years of a person’s career can be very impactful. It is unwise to encourage people to plan for careers that probably won’t work out.

Ideas

Let us imagine the second perspective is correct and think about what to do. Of course, you can still do direct work if there is a good fit for your specific abilities and experiences. You can also look for career capital you are especially suited to build quickly (for example, trying to pivot into AI safety). But other effective altruists may think of the same plan.

One scalable option is clearly earning to give. If you earn to give and donate X dollars per year you basically increase the total amount donated to effective charities by X dollars. The marginal value of EA donations does decrease as more total dollars are donated. But we seem pretty far from exhausting opportunities for effective giving. Overall earning to give does not suffer much from replaceability issues.

Another option is to be willing to make choices other effective altruists seem unwilling to make. For reasons I do not fully understand, it seems like few EAs want to try starting an organization even though many (perhaps most) want to work at one. Having more organizations seems useful to me. It would add healthy competition, and there is definitely the talent pool. Perhaps potential founders think they cannot get funding? On net, trying to found an org, conditional on having a strong team and vision, seems like a plausibly high-EV idea. Founding an org probably decreases the odds of other orgs being funded, so there are some replaceability concerns.

One could also do EA work for low or potentially no wages. Starting an org with no seed funding would effectively require doing this for some amount of time. Any EA org that is hiring constrained should not be offering very low wages. But perhaps there are ways to produce effective research for no compensation. Doing this with no structure might be an ineffective use of human capital. Organizing volunteers can be challenging but perhaps volunteer-run EA orgs could be created?

You could consider taking on legal risks. Various forms of non-violent direct action might be an effective way to get high-impact political changes. This view seems most common in the animal rights/welfare community. The number of people willing to take legal risks is quite low so replaceability concerns are negligible.

There are some ways to ‘donate’ things besides money. Various organ donation charities have been given some EA funding. Donating a kidney, or a portion of a liver, is not replaceable. Various personal forms of mentorship are probably not replaceable either. However, it’s hard to imagine how one could make anything like a career out of these opportunities. But it would be valuable to keep them in mind.

Engaging Seriously with Short Timelines

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a ‘slow takeoff’ where “There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles.” It is unclear to me whether that is more or less likely than a rapid and surprising singularity. But it certainly seems much easier to prepare for. I don’t think we have a good model of what exactly will happen but we should prepare for as many winnable scenarios as we can.

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation – Assuming timelines are short, we might not have time for provably safe AI. We need AI safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.

IDA is a good framework to bet on, in my opinion; OpenAI seems to be betting on it. Here is an explanation. Here is a lesswrong discussion. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute. Other frameworks that can be used in the short term to align real systems would also be valuable.
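As a toy illustration of the amplify-then-distill loop, and emphatically not how the papers actually implement it: below, a ‘weak model’ that can only add a couple of numbers is amplified by decomposing a bigger task into subtasks, and the amplified behavior is then distilled (a lookup table stands in for supervised training). Real IDA repeats this loop, so each round’s distilled model becomes the next round’s base:

```python
# Cartoon of one round of Iterated Amplification and Distillation.
def amplify(model, xs):
    """Answer a hard task (summing a long list) by decomposing it into
    subtasks the weak model can handle, then combining the answers."""
    if len(xs) <= 2:
        return model(xs)
    mid = len(xs) // 2
    return model([amplify(model, xs[:mid]), amplify(model, xs[mid:])])

def distill(behavior, training_tasks):
    """Stand-in for training a fast model to imitate the amplified behavior."""
    cache = {tuple(t): behavior(t) for t in training_tasks}
    return lambda xs: cache[tuple(xs)]

base = lambda xs: sum(xs)  # the 'weak' model: adds at most two numbers
tasks = [[1, 2, 3, 4], [5, 6, 7, 8]]
fast = distill(lambda xs: amplify(base, xs), tasks)
print(fast([1, 2, 3, 4]))  # 10, answered in one cheap call
```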

Get capital while you can – Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least, money can be converted into time. Be frugal; you might need those resources soon.

Besides, the value of human capital might fall. If you have a lucrative position (e.g. in finance or tech), now is a good time to focus on making money. For the same reason, investing in your human capital by going back to school is a bad idea.

Invest Capital in companies that will benefit from AI technology – Tech stocks are already expensive so great deals will be hard to find. But if things get crazy you want your capital to grow rapidly. I would especially recommend hedging ‘transformative AI’ if you will get rich anyway if nothing crazy happens.

I am doing something like the following portfolio:

ARKQ – 27%
BOTZ – 9%
Microsoft – 9%
Amazon – 9%
Alphabet – 8% (ARKQ is ~4% Alphabet)

Facebook – 7%
Tencent – 6%
Baidu – 6%
Apple – 5%
IBM – 4%

Tesla – 0% (ARKQ is ~10% Tesla)
Nvidia – 2% (both Botz and ARKQ hold Nvidia)
Intel – 3%
Salesforce – 2%
Twilio – 1.5%
Alteryx – 1.5%

BOTZ and ARKQ are ETFs. They have pretty high expense ratios. You can replicate them if you want to save 68-75 basis points. BOTZ is pretty easy to replicate with only ~10K; a sketch is below.
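Replication just means taking the ETF’s published holdings and weights and buying them directly. A minimal sketch, where the tickers, weights, and prices are placeholders rather than BOTZ’s actual holdings:

```python
# Turning target weights into whole-share orders (placeholder data).
target_weights = {"NVDA": 0.08, "ISRG": 0.08, "ABB": 0.07}  # ...rest omitted
prices = {"NVDA": 135.0, "ISRG": 73.0, "ABB": 28.0}         # hypothetical quotes
capital = 10_000

for ticker, weight in target_weights.items():
    shares = int(capital * weight / prices[ticker])  # round down to whole shares
    print(f"buy {shares} {ticker} (~${shares * prices[ticker]:,.0f})")
```

The ~10K figure matters because of whole-share rounding: with too little capital, the rounding error swamps the expense ratio you are trying to save.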

Several people think that land will remain valuable in many scenarios. But I don’t see a good way to operationalize a bet on land. Some people have suggested buying options since it is easier to get leverage and the upside is higher. But getting the timing right seems tricky to me.

Physical and Emotional Preparation – You don’t want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with RSI, work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you casually approach training. Track your results and have a system!

In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).

Political Organizing and Influence – Technological progress does not intrinsically help people. Current technology can be used for good ends. But it can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0. By the standards of previous eras, change accelerated a huge amount. ‘Singularity 1.0’ did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, “Only a crisis – actual or perceived – produces real change”. If there is a crisis coming there may be large political changes coming soon. Influencing these changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for too long. But various sorts of intergovernmental cooperation might be possible and increasing the odds of these deals could be high value.

Capabilities Research – This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race or at least that GPT-4 will. In this case, it might make sense to help a relatively values-aligned organization win (such as OpenAI as opposed to the CCP). If you are, or could be, very talented at deep learning you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?