Hello from CFAR! Here’s a look at what we did during the past few months and what we’re planning for the next few. We’re also in the middle of our year-end fundraiser, including an AMA we’re holding right now on LessWrong until 10am tomorrow; check out our fundraiser page to see a broader look at the last two years and the next one, or to donate!
What We Did
Mainline Workshops in Prague
Our Czech hosts continue to allow us back into Europe despite our very clear attempts to rob them of cool people. We held another two mainline workshops with the help of our instructor candidates, arriving a few days ahead of time to prep with everyone. My bet is that Dan would be fine if he never had to watch or review Inner Sim again. The workshops went very well! They got the highest ratings of any of our European workshops so far, for whatever that’s worth.
Elizabeth led these workshops after teaching at ESPR in Oxford and attending the LessWrong Community Weekend in Berlin. Given her enthusiasm for the European community and all of the cool projects they have going, we were a bit worried she’d never come back. As always, I feel super appreciative of the Czech EAs for the spirit, diligence, and humor they have demonstrated over the last three years of hosting CFAR workshops around Prague.
Staff Retreat
About once a year, CFAR staff and some close friends come together for a week to bang our heads against our shared concepts. It’s a great time to integrate what we’ve learned and to develop it into more formalized concepts. This year’s retreat was very promising! We’ve added a new piece of the rationality puzzle to our curriculum, this time focused on groundedness and making progress safely. Don’t be surprised if you hear about one of the new concepts we’ve been throwing around, such as anti-crux, microcosms, and three-point contact.
AIRCS Workshops
We continue to run our AIRCS workshops with MIRI—they offer sharp computer scientists an opportunity to engage with the AI safety problem and explore research in the field. We ran a standard workshop in October and have another planned for December. For a bit more detail on our impression of the program’s impact, see the Progress Report & Future Plans we posted here.
If you’re a computer scientist who might enjoy the workshop (or if you know such a person), you can find the interest form here.
Alumni Reunion
We definitely had our most “exciting” alumni reunion yet! While we planned to start on Friday evening, some protestors arrived before our guests did. This prompted a sizeable response from the local police and turned our first evening into a slumber party at our Bodega Bay venue. Things proceeded normally on Saturday and Sunday at the Westminster Woods campsite, and we had a very enjoyable reunion! It was fun seeing our alumni interact with our new curriculum, hearing what they’ve been up to, and (the staff add) watching Tim so excitedly explain the “official” rules of Ultimate Frisbee.
The talks included: Grouphouse Wisdom Sharing, Debugging, When Is “Self-Help” Helpful, Poetry Swap, Social Justice and Rationality, Bio Emotive Processing, EA Projects Sharing, How to Stop Organizing Your Theorizing, Mesa-Rationality, and Anti-Crux.
What We’re Planning
Mainline Workshops in Bodega Bay (Jan 29-Feb 2 & Mar 4-8)
To few people’s surprise, we are running more mainline workshops! Luke is our grandmaster of admissions and suggests you click here if you’d like to attend any workshops in 2020. The first two will be held in Bodega Bay with veteran staff and many instructor candidates.
In March 2020, we’ll wrap up the yearlong instructor training that began in April 2019. We expect several of our instructor candidates to advance to newly certified CFAR instructors, growing the roster of people available to teach at workshops beyond our full-time staff and the instructors certified in our 2017 training. This year’s program was one of our largest projects: by the time we conclude the final workshop in March, we’ll have held five weekend retreats and run five mainline workshops with instructor candidates. The whole team put in a great deal of effort to make it happen. We’re especially grateful to Eli Tyre and Brienne Yudkowsky for stepping up to develop the curriculum and execute the first phase of the program (we’re calling it “Quests and cloaks”), which seemed to achieve its aim of helping people do generative, original-seeing-type thinking and research.
All the best,
Tim & the CFAR team