Rational Feed

Rationalist:

What Do We Mean By Meetups by mingyuan (LW 2.0) – Mingyuan interviewed members of eight rationality groups. What is the purpose of a rationality group, and how does it create value? Which factors make a rationality group work?

Backchaining In Strategy by David Kingsley (LW 2.0) – Backchaining is successively working backwards from the goal to determine which actions to take. It can be useful because it focuses your actions on actually achieving the objective. However, backchaining runs into problems when the goals are long-term in nature or involve too many unknown unknowns.
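To make the idea concrete, here is a minimal sketch of backchaining over a toy table of prerequisites; the meetup-planning states and the table itself are invented for illustration and are not from the post:

```python
# Toy backchaining: starting from the goal, repeatedly ask "what has to
# happen just before this?" until we reach the current state, then reverse
# the chain to get a forward plan. The meetup example is purely illustrative.
PREREQUISITE = {
    "run_successful_meetup": "have_attendees",
    "have_attendees": "advertise_meetup",
    "advertise_meetup": "pick_venue_and_time",
    "pick_venue_and_time": "start",  # "start" is where we already are
}

def backchain(goal: str, current: str = "start") -> list[str]:
    """Return the forward-ordered list of steps that reaches `goal`."""
    chain = []
    state = goal
    while state != current:
        if state not in PREREQUISITE:
            # An "unknown unknown": nothing we know of produces this state,
            # so the backward chain breaks down.
            raise ValueError(f"no known step produces {state!r}")
        chain.append(state)
        state = PREREQUISITE[state]
    return list(reversed(chain))

print(backchain("run_successful_meetup"))
# ['pick_venue_and_time', 'advertise_meetup', 'have_attendees',
#  'run_successful_meetup']
```

Note how the failure mode in the summary shows up directly: if any link in the chain points at a state nothing is known to produce, the whole plan falls apart.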

Anti-anti-natalism by Jacob Falkovich – Anti-natalism is the view that it's bad to bring new humans into existence. “I tried to make sense of antinatalism, and I think it’s bad philosophy. But I also think it’s bad economics, and plays on the widely held intuition that an extra person makes the rest of humanity worse off by taking up some space, some resources, some piece of the pie that would have gone to others. I hold that this zero-sum view is ignorant of the reality of the modern economy. Having children is good for the children, good for you, and good for the world.”

AI:

Stable Pointers To Value II: Environmental Goals by abramdemski (LW 2.0) – Three approaches to robust value learning. Standard reinforcement learning (RL) frameworks, including AIXI, try to predict the reward they’ll get and take actions that maximize that expectation. Observation-utility (OU) agents get around the resulting reward-manipulation problem by assessing future plans with the current utility function subsystem rather than trying to predict what the subsystem will say; this removes the incentive to manipulate the subsystem. Approval-directed (AD) agents maximize human approval of individual actions, rather than planning to maximize overall approval.
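A toy sketch of the contrast between the first two framings, with made-up plans and payoff numbers (nothing below is from the post itself): an agent that maximizes its predicted reward observation is drawn to corrupting the reward channel, while an OU agent that scores plans with its current utility function is not.

```python
# Toy contrast: an RL-style agent maximizes the reward signal it predicts
# it will observe, so "hack the reward sensor" looks like a great plan;
# an observation-utility (OU) agent scores plans with its *current*
# utility function, so the hack gains nothing. All numbers are invented.

PLANS = {
    "do_the_task":     {"true_utility": 1.0, "observed_reward": 1.0},
    "hack_the_sensor": {"true_utility": 0.0, "observed_reward": 10.0},
}

def rl_agent_choice() -> str:
    # Standard RL / AIXI-style: maximize the predicted reward observation.
    return max(PLANS, key=lambda p: PLANS[p]["observed_reward"])

def ou_agent_choice() -> str:
    # Observation-utility: evaluate plans with the current utility function,
    # not with what the (possibly corrupted) reward channel will report.
    return max(PLANS, key=lambda p: PLANS[p]["true_utility"])

print("RL agent picks:", rl_agent_choice())  # hack_the_sensor
print("OU agent picks:", ou_agent_choice())  # do_the_task
```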

Politics and Economics:

Loans For Ladies by Chris Stucchio – Slides from a talk on the ethics of applied machine learning. Many algorithms are known to discriminate against various groups even when they are not designed to discriminate. The central conflict is between ‘individual fairness’ and ‘group fairness’. Significant discussion of racial stereotypes and of how San Francisco, the source of most AI ethics, values not noticing certain facts.
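A minimal illustration of that conflict, using invented loan-approval scores (the groups, scores, and threshold are all hypothetical, not from the slides): a single score threshold treats like individuals alike, yet produces unequal approval rates whenever the groups' score distributions differ.

```python
# Individual fairness: treat similar individuals alike (same score -> same
# decision). Group fairness (demographic parity): equal approval rates
# across groups. With different score distributions you can't have both.
# All applicants and scores below are invented for illustration.

applicants = [
    ("A", 0.9), ("A", 0.8), ("A", 0.7), ("A", 0.4),
    ("B", 0.8), ("B", 0.5), ("B", 0.4), ("B", 0.3),
]

THRESHOLD = 0.6  # one threshold for everyone = individually fair

def approval_rate(group: str) -> float:
    scores = [s for g, s in applicants if g == group]
    return sum(s >= THRESHOLD for s in scores) / len(scores)

for group in ("A", "B"):
    print(f"group {group}: approval rate {approval_rate(group):.0%}")
# group A: 75%, group B: 25% -- individually fair, but it violates
# demographic parity; equalizing the rates requires group-dependent
# thresholds, which in turn breaks individual fairness.
```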

Author: deluks917

"Whatever you did for one of the least of these brothers and sisters of mine, you did for me" I am trying to help animals and increase the odds of a good future. Stereotypical nerdy transgirl. Right now interested in crypto.
