AISFP, a two-week summer program designed to increase participants' ability to do technical research on the AI alignment problem, will take place in the San Francisco Bay Area from June 27 to July 14.
The intent of the program is to boost participants as far as possible in four skills:
- The CFAR applied rationality skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops.
- Epistemic rationality as applied to the foundations of AI and other philosophically tricky problems — i.e., the skillset taught in the core LW Sequences (e.g., reductionism; how to reason in contexts as confusing as anthropics without getting lost in words).
- Technical forecasting about AI and about AI alignment interventions (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
- The ability to do AI alignment-relevant technical research, while reflecting on the cognitive habits involved. We will give crash courses in reflection, logical uncertainty, and decision theory.