Process
Status
- Output: None
- Questions: None
- Claims: None
- Highlights: Done (see section below)
Highlights
id793645900
The most powerful allies of industry were national Democrats who came out against the bill. But they had a big assist from “godmother of AI” and Stanford professor Fei-Fei Li, who published an op-ed in Fortune falsely claiming that SB 1047’s “kill switch” would effectively destroy the open-source AI community. Li’s op-ed was prominently cited in the congressional letter and Pelosi’s statement, where the former Speaker said that Li is “viewed as California’s top AI academic and researcher and one of the top AI thinkers globally.” Nowhere in Fortune or these congressional statements was it mentioned that Li founded a billion-dollar AI startup backed by Andreessen Horowitz, the venture fund behind a scorched-earth smear campaign against the bill. The same day he vetoed SB 1047, Newsom announced a new board of advisers on AI governance for the state. Li is the first name mentioned.
✏️ Agendas reign supreme.
id793645911
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047. In other words, AI models large and small can both be risky — so we shouldn’t regulate either?
✏️ The age-old argument that the current thing doesn’t cover all possible instances of abuse, so therefore it should be killed outright.
id793646875
Supporters tend to be more worried about being too late than too early. Bill cosponsor Teri Olle, director of Economic Security California, said in a phone interview, “The last time we had this kind of a moment was with social media,” but “we blinked and as a result we are now still trying to pick up the pieces.”
id793647046
The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration. But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization. In short, it’s capitalism versus humanity.
✏️ This is good. When innovation happens… when disruption happens… it’s the decision-makers who dictate what ends up happening. People, as human beings, are trying to reflect values that aren’t profitable; they’re about safety and protection. They’re not anti-tech, just anti-exploitation. But if they don’t have the power or the deciding vote… if the companies can buy votes, support, and influence, then that’s what happens.