Classified Thread 2 Best In Classified by Scott Alexander – Scott is promoting a project to accelerate the trend of rationalists living near each other. Four houses are available for rent near Ward Street in Berkeley, currently the rationalist hub in the Bay. Commenters can advertise other projects and services.
What Value Subagents by G Gordon (Map and Territory) – Splitting the mind into subagents is a common rationalist model (links to Alicorn, Brienne Yudkowsky, etc.). However, the author's preferred model is a single process with inconsistent preferences. Freud. System 1 and System 2. The rider and the elephant become one. Subagents as masks. Subagents as epicycles.
Be My Neighbor by Katja Grace – Katja lives in a rationalist house on Ward Street in Berkeley and it's great. The next step up is a rationalist neighborhood. Katja is promoting the same four houses as Scott. Be her neighbor?
Meditation Insights Suffering And Pleasure Are Intrinsically Bound Together by Kaj Sotala – The concrete goal of meditation is to train your peripheral awareness. Much suffering comes from false promises of pleasure. Procrastinating by playing a video game won't actually make you feel better. Temptation loses its power once you see temptations for what they truly are.
Change Is Bad by Zvi Mowshowitz – “Change space, like mind space, is deep and wide. Friendly change space isn’t quite to change space what friendly mind space is to mind space, but before you apply any filters of common sense, it’s remarkably close.” A long list of conditions under which change has lower expected value. Why we still need to make changes. Keep your eyes open.
Knowing How To Define by AellaGirl – “These are three ways in which a word can be ‘defined’ – the role it plays in the world around it (the up-definition), synonyms (lateral-definition), and the parts which construct the thing (down-definition).” Applications to morality and free will.
OpenAI Baselines: PPO by OpenAI – “We’re releasing a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which perform comparably or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI because of its ease of use and good performance.”
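The release's central trick is PPO's clipped surrogate objective, which caps how far a single update can move the new policy from the old one. A minimal NumPy sketch (the function name and the epsilon value of 0.2 are illustrative, not from the release post):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective (to be maximized).

    ratio     -- pi_new(a|s) / pi_old(a|s), the probability ratio r_t
    advantage -- estimated advantage A_t for each sample
    Taking the elementwise min removes any incentive to push the ratio
    outside [1 - eps, 1 + eps] when doing so would raise the objective.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A large ratio with positive advantage is capped at (1 + eps) * A:
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # → [1.2]
```

This simplicity (a clip and a min, instead of TRPO's constrained second-order update) is what the quoted claim about ease of implementation refers to.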
Conversation With An AI Researcher by Jeff Kaufman – The anonymous researcher thinks AI progress is almost entirely driven by hardware and data. Backpropagation has existed for a long time. Go would have taken at least 10 more years if Go AI work had remained constrained by academic budgets.
An Argument For Why The Future May Be Good by Ben West (EA forum) – Factory farming shows that humans are deeply cruel. Technology enabled this cruelty, perhaps the future will be even darker. Counterargument: Humans are lazy, not evil. Humans as a group will spend at least small amounts altruistically. In the future the cost of reducing suffering will go down low enough that suffering will be rare or non-existent.
Tranquilism by The Foundational Research Institute – A paper arguing that reducing suffering is more important than promoting happiness. Axiology. Non-consciousness. Common Objections. Conclusion.
Why I Think The Foundational Research Institute Should Rethink Its Approach by Mike Johnson (EA forum) – A description of the FRI. Good things about FRI. FRI’s research framework and why the author is worried. Eight long objections. TLDR: “functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.”
The Ominouslier Roar Of The Bitcoin Wave by Artem and Venkat (ribbonfarm) – A video visualizing and sonifying the Bitcoin blockchain. A related dialogue.
Politics and Economics:
Triggered by Waking Up with Sam Harris – “Sam Harris and Scott Adams debate the character and competence of President Trump.”