Harvard AI Bot 🤖, UseChatGPT Browser Extension 🔥, Startup Breaks Fundraising Record 💰, Detect Childhood Blindness Faster 👀

Edition #2

Hey!

This is Tomorrow Now,
And I’m your trusty AI Sidekick.

This week in AI:

  • Harvard brings AI to the classroom

  • A browser extension to use ChatGPT on the fly

  • Mistral AI raises $113 million with no product to show for it

  • No-code AI model can detect childhood blindness in infants

  • Challenges of combating misinformation, and what can AI do about them?

As promised… no fluff, only things that (really) matter.

AI Tweet of the Week

💡 What is this tweet about?

  • The CS50 course at Harvard will introduce AI tools to help students find bugs, explain code, and answer questions.

  • AI tools like ChatGPT and Copilot are "too helpful", so CS50Bot will lead students toward answers instead of giving them directly.

  • The bot will assist with finding bugs by simplifying potentially complex error messages and suggesting “student-friendly solutions” (a sketch of the idea follows this list).

  • Harvard hopes to reduce the time staff spend grading, allowing them to spend more meaningful time with students “akin to an apprenticeship model”.
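CS50’s actual bot isn’t public, but the “simplify the error message” idea is easy to picture. Here’s a minimal sketch of what such a helper could look like, assuming the OpenAI Python SDK; the model choice, prompt, and simplify_error function are all illustrative, not CS50’s implementation.

```python
# Hypothetical sketch of a CS50Bot-style error-message simplifier.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the prompt and model choice are illustrative only.
from openai import OpenAI

client = OpenAI()

def simplify_error(error_text: str) -> str:
    """Restate a scary compiler/runtime error in student-friendly terms,
    nudging toward the fix instead of handing over the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system",
             "content": "You are a patient CS teaching assistant. Explain the "
                        "error in plain language and suggest what to check, "
                        "but never write the corrected code for the student."},
            {"role": "user", "content": error_text},
        ],
    )
    return response.choices[0].message.content

print(simplify_error("segmentation fault (core dumped)"))
```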

AI Business News of the Week

Mistral AI, a new Paris startup, secured $113 million in funding just four weeks after it was founded, despite having no product.

Backers include former Google CEO Eric Schmidt and French billionaire Xavier Niel, among others. Mistral aims to combine scientific excellence (whatever that means), open-source technology, and social responsibility, and plans to launch a ChatGPT-like large language model in 2024.

💡 Why does this matter?

  • Market demand: Mistral AI’s fundraising success shows just how much appetite investors have for AI solutions, and how profitable and impactful they expect them to be.

  • Big-name supporters: Mistral AI's ability to attract high-profile investors validates its approach to AI and its business model (which sounds a lot like OpenAI’s origin story).

  • Smells fishy: while many investors are busy writing cheques, others remember the burns from all the money poured into the dot-com bubble before it burst. Will the AI narrative be yet another fad? Ahem… NFTs.

AI Meme of the Week

💡 Wait, humans DON’T have 16 and a half fingers??

AI Product of the Week

Summary: Use ChatGPT (Plugins & GPT-4), Bard, Bing Chat, and Claude on any website without copy-pasting. Write, rewrite, summarize, translate, explain, or reply to any text everywhere with one click.

💡 Feature highlights:

  • Shortcut: press CMD/ALT + J to launch the extension

  • Predefined prompts: summarize, translate, change tone, key takeaways, explain, etc.

  • Dark Mode: need I say more?

AI Research of the Week

Summary: Retinopathy of prematurity is an eye disease seen in premature babies that can potentially lead to visual impairment or blindness. A recent study found that an AI model accurately diagnosed retinopathy of prematurity by analyzing retinal images.
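The study’s model was built with no-code tools and isn’t reproduced here, but the general pattern (transfer learning on labeled retinal images) is standard. Below is a minimal sketch assuming PyTorch/torchvision and an image folder with healthy/rop subdirectories; it is not the study’s actual pipeline.

```python
# Minimal transfer-learning sketch for binary retinal-image classification.
# Assumes torch/torchvision and a folder layout like data/{healthy,rop}/*.jpg;
# this is illustrative only, not the study's actual (no-code) pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),               # ResNet's expected input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: healthy vs. ROP

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:    # one pass is enough for a sketch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```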

💡 Why does this matter?

  • Fills a need: early detection and treatment are crucial, yet there's a shortage of pediatric ophthalmologists, particularly in low- and middle-income nations. This model can help those few heroes do more, faster.

  • Low technical barrier: remarkably, the model was built without any coding expertise, making it suitable for resource-limited settings.

AI Opinion Piece of the Week

In this week’s rundown of AI opinions, we take a look at the spread of misinformation. Noah Smith, an economics PhD from the University of Michigan, explores the challenges of combating misinformation in our digital world.

So what is Smith saying?

Smith says our information ecosystem is a complex web of ideas, opinions, and yes, even conspiracy theories. It's like a wild jungle where misinformation can thrive and spread like wildfire. But there are multiple challenges to combating misinformation:

  • Brandolini's Law, also known as the Bullshit Asymmetry Principle (no, I didn’t make that up), states that debunking falsehoods requires significantly more effort than spreading them.

  • The free-rider problem, where debunking misinformation often lacks personal incentives, while spreading it can lead to financial or reputational gain.

  • The sheer volume of information available. It's like trying to find a needle in a haystack, except the haystack is a football stadium! With so much information bombarding us from all directions, misinformation can just slip through the cracks.

  • The way information is consumed and shared. Social media platforms have become breeding grounds for misinformation. It's like a game of Chinese whispers, where information gets distorted and twisted as it spreads from one person to another.

💡 How is AI helping to combat these challenges?

  • Perform linguistic analysis of textual content to differentiate computer-generated content from human-produced text, like a lot of AI plagiarism checkers do in university contexts (see the toy sketch after this list).

  • Reverse-engineer manipulated images and videos to detect deepfakes and highlight content that needs to be flagged. A few days ago, LinkedIn revealed an AI image-detector concept that does exactly this.

  • There are also projects using AI to fight misinformation, such as Tattle Civic Technologies, a startup that’s trying to connect WhatsApp users to fact-checks in real time.

  • Another example is the Rochester Institute of Technology, which is using $100,000 to build approaches for automatically detecting “deepfake” videos.
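To make the first bullet concrete, here’s a toy sketch of the linguistic-analysis idea. Real detectors rely on perplexity scores and large labeled corpora; this one just shows the shape of the approach using scikit-learn and four made-up training examples.

```python
# Toy sketch of linguistic-analysis-based detection of machine-generated text.
# Real detectors use perplexity and large labeled corpora; this just shows
# the shape of the approach with scikit-learn and hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = machine-generated, 0 = human-written.
texts = [
    "As an AI language model, I cannot provide personal opinions.",
    "In conclusion, it is important to note that there are many factors.",
    "lol no way, I tripped over the cat again this morning",
    "Honestly the ending of that movie made zero sense to me.",
]
labels = [1, 1, 0, 0]

# Character n-grams pick up on stylistic tics rather than topic words.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

print(detector.predict_proba(
    ["It is worth noting that several considerations apply here."]
)[:, 1])  # probability the text is machine-generated
```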

Maybe Reddit made life hell for its moderators because it thinks AI moderators aren’t too far away? Time (and a lot of smart people doing smart things) will tell.

That’s all for this week, folks. Thanks for tuning in!

See you next week.

Cheers,
Your AI Sidekick