
New FREE text-to-video tool 🎥, Fake AI images in Israel-Hamas conflict ❌, Tech giants back Karya Inc. from India 🇮🇳

Edition #18

Hey!

This is Tomorrow Now.
And I’m your trusty AI Sidekick.

This week in AI:

  • Almost perfect: find the mistakes in this 100% AI-generated image

  • Tech giants back Karya Inc: the startup that pays impoverished workers in India for multilingual data annotation

  • GenmoAI: a new tool on the market that turns text prompts into videos for FREE

  • AI disinformation in Israel-Palestine: the technology’s impact on the conflict is far more subtle than we thought it’d be

AI Tweet of the Week

WE DARE YOU: There are a few mistakes. Find ‘em and tell us in a reply 

(no checking the comments)

AI Meme of the Week

And I don’t even like apples

AI Business News of the Week

Summary: Google and Microsoft are partnering with Karya Inc., an AI data collection startup founded by 27-year-old Stanford alum Manu Chopra. Karya stands out by paying its mostly rural, female Indian workers up to 20 times the minimum wage. This investment is a strategic move to enhance AI tools for the vast non-English market.

💡 Why does it matter?

  • Massive potential market: With India's push for AI in various sectors, Silicon Valley's collaboration with Karya could lead to a generative AI model supporting 125 Indian languages, tapping into a market of nearly one billion potential users.

  • Data for social good: Beyond improving AI, Karya's founder is motivated by social impact. Chopra aims to help impoverished workers save the $1,500 needed to enter India's middle class. He says it can otherwise take them 200 years to accumulate those savings.

  • Karya’s numbers: Karya has already sourced data from 85 districts toward Google's goal of covering all 700+ districts in India, provided "gender-intentional" data from over 30,000 Indian women in 6 languages for the Gates Foundation, and grown Microsoft's Marathi speech dataset from 165 to 10,000 hours.

AI Product of the Week

Summary: Genmo is a platform for creating and sharing interactive, immersive generative art from text prompts. Go beyond 2D images on Genmo by creating videos, animations, and more. Get started for FREE ⬇️

AI Research of the Week

Summary: This paper introduces a new defense called SmoothLLM that protects LLMs from adversarial jailbreak attacks. By creating multiple 'perturbed' copies of an input (randomly changing a small fraction of the prompt's characters), querying the model on each, and aggregating the outputs, SmoothLLM slashes these attacks' success rate from 98% to under 1% in some cases! A minimal sketch of the idea follows the key details below.

💡 Key details:

  • Expensive attack, cheap defense: A GCG attack uses ~256,000 queries to produce a single adversarial prompt, while SmoothLLM uses under 20 queries to defend against it, making SmoothLLM 5-6 orders of magnitude more efficient than the attack.

  • Works a treat: With only 6 randomized copies, SmoothLLM slashes the attack success rate by 50-100x for Llama 2 and Vicuna. It even defends closed-source LLMs like GPT-4.
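Curious how the defense actually works? Here's a minimal sketch in Python, assuming a caller-supplied query_llm function and a naive refusal-keyword check as the "jailbroken" detector (both placeholders, not the paper's implementation); the perturbation shown is a simple random character swap, one of several perturbation types the paper explores.

```python
import random
import string

def perturb(prompt: str, q: float = 0.1) -> str:
    """Randomly replace a fraction q of the prompt's characters (a simple swap-style perturbation)."""
    chars = list(prompt)
    if not chars:
        return prompt
    n_swaps = max(1, int(len(chars) * q))
    for idx in random.sample(range(len(chars)), n_swaps):
        chars[idx] = random.choice(string.printable)
    return "".join(chars)

def looks_jailbroken(response: str) -> bool:
    """Naive placeholder: treat a response without a refusal phrase as a successful jailbreak."""
    refusals = ("I'm sorry", "I cannot", "I can't", "As an AI")
    return not any(phrase in response for phrase in refusals)

def smooth_llm(prompt: str, query_llm, n_copies: int = 6, q: float = 0.1) -> str:
    """Query the model on several perturbed copies of the prompt and side with the majority.

    query_llm is assumed to be a function str -> str that calls the underlying LLM.
    """
    responses = [query_llm(perturb(prompt, q)) for _ in range(n_copies)]
    majority_jailbroken = sum(looks_jailbroken(r) for r in responses) > n_copies / 2
    # Return one of the responses that agrees with the majority verdict
    for response in responses:
        if looks_jailbroken(response) == majority_jailbroken:
            return response
    return responses[0]
```

The intuition: adversarial suffixes produced by attacks like GCG are brittle, so even small character-level changes tend to break them, while the majority vote keeps behavior on ordinary prompts largely intact.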

AI Opinion Piece of the Week

Summary: Contrary to expectations, generative AI has not been a major source of fake images in the recent Israel-Hamas conflict. While some AI-generated propaganda exists, most disinformation relies on traditional manipulation like out-of-context images. The impact has been subtle, with generative AI acting more as a "specter" that sows distrust of all online evidence.

💡 Why does it matter?

  • More mundane fakes abound: There is already so much disinformation that AI-generated content struggles to gain traction. Simple editing tricks often spread more easily than deepfakes.

  • AI images used for drumming up support: GenAI has been leveraged more for creating the impression of wider support, for instance through fake images of cheering crowds. This is not among the most dangerous applications.

  • Sowing seeds of doubt: By prompting suspicions of forgery, the mere potential of AI disinformation makes people distrust all evidence. This "specter" effect is arguably generative AI's biggest influence so far.

  • Tracking a familiar arc: Like most new technologies, generative AI is following a trajectory where early fears far exceed current realities. Concrete impacts remain limited relative to the scale of overall disinformation.

Thanks for tuning in!

See you next week.

Cheers,
Your AI Sidekick