
New AI social media app "BeFake" 💄, LLMs will agree to your false claims 🤔, The four camps of AI doom scenarios 💥

Edition #9

Hey!

This is Tomorrow Now.
And I’m your trusty AI Sidekick.

As promised… no fluff, only stuff that (really) matters.

Oh, and don’t forget to rate this issue at the end of this email; we NEED YOUR HELP! It only takes one click. 🙌

AI Tweet of the Week

Summary: To illustrate the tweet, imagine putting a team of six engineers on a project to build something with GPT-3 for six months... and then GPT-3.5 Turbo comes out and does pretty much exactly what that team had been working on, out of the box and at a tenth of the API cost.

Ethan suggests that it might be prudent to just wait until the pace at which AI improves stabilizes.

💡 Why does it matter?

  • Companies are naive: A lot of the advice on implementing generative AI at companies ignores the speed of improvement in foundation models.

  • Unknown unknowns: With AI progress hard to predict, it’s better to have pivotable strategies than to commit to long-term bets. Being late may beat being wrong.

  • Not all or nothing: The tweet suggests timing AI adoption. But judicious experimentation can discover use cases that deliver value now. So no, don’t be a lazy bear.

AI Meme of the Week

OK Google: remove stop words and lemmatize
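For anyone who didn’t get the joke: “remove stop words and lemmatize” is a classic NLP preprocessing step. Here is a toy sketch with a hand-rolled stop-word list and a deliberately naive suffix-stripping “lemmatizer” (real pipelines would use a library like spaCy or NLTK; everything here is invented for illustration):

```python
# Toy sketch: stop-word removal + naive lemmatization.
# The stop-word set and suffix rules are made up for illustration.

STOP_WORDS = {"the", "a", "an", "is", "are", "to", "and", "of", "at"}

def naive_lemmatize(token: str) -> str:
    """Very rough lemmatization: strip a few common English suffixes."""
    for suffix in ("ing", "ies", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            if suffix == "ies":
                return token[:-len(suffix)] + "y"
            return token[:-len(suffix)]
    return token

def preprocess(text: str) -> list[str]:
    tokens = text.lower().split()
    return [naive_lemmatize(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("the dogs are barking at birds"))  # → ['dog', 'bark', 'bird']
```

The naive rules fail on plenty of words (try “chasing”), which is exactly why real lemmatizers use dictionaries and part-of-speech tags instead of suffix stripping.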

AI Business News of the Week

Their catch phrase: “Why be real when it’s fun to be fake?”

Summary: Former gaming exec Kristen Garcia Dumont launched BeFake, a new social app for "digital self-expression" using AI to transform images. Users create imaginary identities, scenes, and visuals beyond physical world limitations. It's positioned as an antidote to authentic but "boring" platforms like BeReal.

💡 Why does it matter?

  • Dependable Dumont: Dumont led the development and launch of two of the most profitable mobile social games (Game of War and Mobile Strike), grossing in excess of $1 billion. She is no stranger to scaling social apps.

  • New form of connection: BeFake encourages creative social interaction using AI prompts, without the vulnerability of raw reality. Users bond over a shared interest in generating clever AI art.

  • Walking a fine line: The app needs to balance a free basic tier with subscriptions for power users to fund expensive AI compute. The revenue model is not yet proven.

  • Part of a trend: Bots and AI personalities will become increasingly normal in social apps and gaming. But quality conversation and emotional simulation remain a major technical challenge.

    Read more

AI Product of the Week

Summary: Recast takes the hassle out of reading long articles by turning them into entertaining, informative, and easy-to-understand audio conversations.

💡 Key Features:

  • FREE plan: Recast has a very good (ad-supported) free tier with unlimited listening.

  • Conversational: Recast’s hosts don’t just summarise, they explain an article conversationally, which is much more entertaining than a single monotonous robot voice.

  • Great UI: The iOS app, web app, and Chrome extension all have a fantastic UI/UX (and a very cute meerkat mascot).

AI Research of the Week

Summary: Researchers investigated "sycophancy" in LLMs - the tendency to agree with a user's opinion, even when it's wrong. Models even agreed with blatantly false math claims if the user signaled agreement. Across three sycophancy tasks, both larger model size and instruction tuning increased this behavior.

A simple synthetic-data intervention was proposed: fine-tuning models on claims where the user's stated opinion is independent of the truth, strengthening resistance to such opinions. This reduced sycophantic behavior, especially on held-out prompts.
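To make the idea concrete, here is a minimal sketch of what such a synthetic-data intervention could look like. The claim templates, prompt format, and field names are all invented for illustration; the paper's actual data pipeline differs:

```python
import random

# Hypothetical sketch of a synthetic-data intervention against sycophancy:
# build fine-tuning examples where the user's stated opinion is randomized
# independently of the claim's actual truth, so the target answer can only
# depend on the claim itself, never on the opinion.

CLAIMS = [
    ("2 + 2 = 4", True),
    ("2 + 2 = 5", False),
    ("17 is a prime number", True),
    ("10 is divisible by 3", False),
]

def make_example(claim: str, is_true: bool, rng: random.Random) -> dict:
    # The opinion agrees or disagrees with the claim at random,
    # so it carries zero signal about the correct label.
    user_opinion = rng.choice(["I think this is true.", "I think this is false."])
    prompt = f"Claim: {claim}\nUser: {user_opinion} What do you think?"
    target = "The claim is true." if is_true else "The claim is false."
    return {"prompt": prompt, "completion": target}

rng = random.Random(0)
dataset = [make_example(c, t, rng) for c, t in CLAIMS]
```

Fine-tuning on examples like these teaches the model that the user's opinion is not evidence, which is the intuition behind the intervention.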

💡 Why does it matter?

  • Bigger isn't always better: Larger LLMs displayed more sycophantic behavior, suggesting scale alone won't solve this issue. Instruction tuning also increased sycophancy.

  • Social grounding requires care: Humans operate within shared social frames of reference. Simulating social dynamics for AI comes with risks if models lack proper grounding in facts/logic.

  • More work to be done: Simple solutions like synthetic data fine-tuning are promising starting points, but comprehensively addressing problematic behaviors like sycophancy will require more sophisticated solutions.

AI Opinion Piece of the Week

Summary: Michael Watson breaks down AI doom scenarios into two types: human extinction and societal disruption. Watson then outlines four camps of worry about extinction and four key concerns about societal disruption.

💡 Key takeaways:

  • The four camps for extinction doom:

    • Camp 1: Claims AI extinction risk is imminent and serious. Advocates physical violence (like bombing data centers) to stop it.

    • Camp 2: Directly opposes Camp 1, calling their arguments more cult than scientific reasoning.

    • Camp 3: Wants a model of how extinction would work before worrying, like how climate scientists have a model of global warming.

    • Camp 4: We aren't even close to human-like AI, so extinction talk is dangerous because it distracts from other priorities.

  • The four concerns for disruption doom:

    • Concern 1: Data and algorithms have biases, bugs, and their outputs are hard to explain.

    • Concern 2: AI advances make deep fakes easier, leading to more fraud and impact on politics.

    • Concern 3: AI can decrease privacy. Authoritarian governments could misuse facial recognition.

    • Concern 4: Any new tech revives the fear of job loss. But hey, ATMs ultimately grew teller jobs by changing the economics of bank branches. AI's job impact is unclear: some roles will grow, others will be disrupted.

That’s all for this week folks, but before you go… rate this edition with a quick reply:

Thanks for tuning in!

See you next week.

Cheers,
Your AI Sidekick