Discussions about the dangers of AI keep heating up:

Historian Yuval Noah Harari has big fears about the impact of LLMs, despite their technical limitations – or maybe because of them.

Geoffrey Hinton, “Godfather of AI” and a pioneer of deep learning, quits Google so that he can openly voice his mind and concerns about AI. In particular, he’s quoted in The Conversation urging us to stop arguing about whether LLMs are AI, to acknowledge that they are some different kind of intelligence rather than comparing them to human intelligence, and to focus on the real-world impacts they’ll have in the near term: job loss, misinformation, autonomous weapons.

Meanwhile, on Honestly with Bari Weiss (read or listen), OpenAI CEO Sam Altman disagrees with the open letter asking his company to pause and let competitors catch up, but he does share the concerns about the downsides of LLMs and asks for government-led regulation and safety structures, similar to those for nuclear technology or aviation. OpenAI tries to be mindful of AI security and alignment¹, but Altman worries that competitors trying to catch up are cutting corners – indirectly making an argument for OpenAI to pause after all, so that competitors can catch up without having to cut those corners. However, he also worries that efforts in other countries (China) ignore these questions. He also suggests that AI companies should not follow traditional corporate structures, and that democratically elected leadership would make more sense for them.

Other good reads and listens on what’s happening lately:

  • Hard Fork discussion on Drake: Loved the discussion on what AI-generated music by fans could mean for artists. There are certain songs I love listening to on repeat, and a few years ago there was this “infinite song” web app floating around that stitched a song into a semi-generative infinite loop. I’d love a version of that that uses generative AI, and possibly mixes different songs together.
  • MLC TestFlight runs an LLM locally on a modern iPhone. Remarkable that it works. While GPT-4 is leaps and bounds ahead in terms of quality, these locally run models could catch up. LLMs likely follow an S-curve in terms of quality, which makes me wonder whether GPT-4 is near the top or still near the start or middle.
  • Ars Technica - Report describes Apple’s “organizational dysfunction” and “lack of ambition” in AI: A rather damning report on how Apple is falling behind other companies in exploring the leaps AI is taking and the new paradigms it opens up. LLM-based improvements on iOS are expected next year at the earliest, i.e., September 2024. I gave up on Siri a while ago.
  • Is your website in an LLM: Great journalism digging into what makes up (some) LLMs. I was happy to see that a bunch of my blog got slurped up, as I hope it might help some developer encountering an obscure problem – though acknowledgement and source links would be appreciated.
  • Surprising things happen when you put 25 AI agents together in an RPG town: Report on the fascinating emergent behaviours of running multiple LLM agents (see the sketch after this list). Hard Fork has a fun discussion of it, too. Really curious how LLMs will impact games.
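
If you’re curious what such an LLM agent looks like mechanically, here’s a minimal sketch of the core loop, under the simplest assumptions: each agent keeps a memory stream, retrieves its most recent memories, and asks a model what to do next. This is my simplification, not the paper’s code – llm(), the agents, and the prompts are all made-up placeholders.

```python
# Minimal sketch of the "generative agents" idea: each agent keeps a
# memory stream, retrieves recent memories, and asks an LLM what to do
# next. Everything here is illustrative, not the paper's implementation.

from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (API or local model)."""
    return "waves hello"  # canned reply so the sketch runs offline

@dataclass
class Agent:
    name: str
    persona: str
    memories: list[str] = field(default_factory=list)

    def act(self, observation: str) -> str:
        self.memories.append(f"I saw: {observation}")
        recent = "\n".join(self.memories[-5:])  # naive retrieval: last 5 memories
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent memories:\n{recent}\n"
            "What do you do next? Answer in one short sentence."
        )
        action = llm(prompt)
        self.memories.append(f"I did: {action}")
        return action

# Two agents in a tiny "town", each observing the other's last action.
agents = [Agent("Ada", "a curious shopkeeper"),
          Agent("Bob", "a retired adventurer")]
event = "the town square is quiet"
for _ in range(3):
    for agent in agents:
        event = agent.act(event)
        print(f"{agent.name}: {event}")
```

Swap llm() for a real chat-completion call (or a locally run model like the MLC one above) and the emergent behaviour comes from the agents feeding each other’s actions back in as observations.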
  1. AI alignment means making sure that, if AI exceeds human intelligence, its goals are aligned with the goals of humanity – and it doesn’t treat us like ants.