- Tomorrow Now
- Posts
- Fun educational game on AI prompt injection 🕹️; how to prompt "weird" images 🥴; Microsoft invests $3.2B in Australia 🇦🇺
Fun educational game on AI prompt injection 🕹️; how to prompt "weird" images 🥴; Microsoft invests $3.2B in Australia 🇦🇺
Edition #17
Hey!
This is Tomorrow Now.
And I’m your trusty AI Sidekick.
This week in AI:
Weird generations: add the “weird” parameter to your DALLE·3 prompts
Microsoft invests in Australia: $3.2B into AI, tech jobs and data centers
Gandalf: an educational game on AI prompt injections and security
Nightshade: prompt poisoning technique - ask for a “dog”, but get a cat
AI Tweet of the Week
Tip: you can access DALLE·3 for free on Bing Chat. Just make sure to turn on Creative mode.
AI Meme of the Week
ChatGPT: “Popular opinion or not, I'm here to chat, not change your mind.”
AI Business News of the Week
Summary: Microsoft announced a massive AU$5 billion (US$3.2 billion) investment into expanding its AI and cloud computing capabilities in Australia over the next two years. The tech giant plans to increase its computing capacity by 250% to meet surging demand, provide cybersecurity support, and fund skills-training for 300,000 Australians. However, looming AI regulation in Australia poses risks.
💡 Key takeaways:
All in on AI: The investment follows a recent report, which Microsoft co-wrote, that said generative AI could contribute up to A$115 billion per year to Australia's economy by 2030 if quickly adopted.
The talent race: Funding skills-training for 300,000 Australians allows Microsoft to bolster its own AI talent pool. But it also makes a public gesture to align itself with the future Australian workforce rather than threaten jobs.
Playing defense: This seems like a preemptive move by Microsoft to get ahead of impending AI regulation in Australia. By framing it as job creation and cybersecurity, Microsoft gains favor with the public and government.
AI Product of the Week
Summary: Lakera, an AI safety and security company based in Switzerland, released ✨Gandalf✨, an educational game that sheds light on security vulnerabilities present in LLMs. Give it a go, can you beat Level 7?
AI Research of the Week
Summary: This paper introduces prompt-specific poisoning attacks against text-to-image generative models. The attacks inject mismatched text/image pairs into the training data to corrupt the model's ability to generate correct images for specific prompts. For example, the attack could make the model generate cats whenever prompted with "dog".
💡 Key takeaways:
Feasible with few images: The AI only needs a few training images per word. So, you don't need many fake images to trick it.
Impact bleeds through: Poisoning "dog" impacts related concepts like "puppy".
Broke image generation: Doing the trick on many different words broke the AI's ability to generate any pictures properly.
Copyright protection potential: This method could potentially stop companies from stealing copyright images to train AI without permission.
AI Opinion Piece of the Week
Summary: This article explains how AI search engines could soon kill off the multi-billion dollar SEO racket. Instead of shady keyword tricks, AI will just directly answer questions, leaving overpriced SEO goons in the dust.
💡 Key takeaways:
Big business at risk: SEO is a $68B industry employing many consultants. Growth projections will be obsolete if AI search takes over.
Financial hit for search engines: Search engines make lots of ad revenue from SEO-optimized sites buying placements. This cash cow could dry up.
Thanks for tuning in!
See you next week.
Cheers,
Your AI Sidekick