
Personality traits in LLMs 💁, AI weights not "open source" 🚫, Google testing medical chatbot 🏥, Midjourney - F.R.I.E.N.D.S Desi makeover 🥻

Edition #4

Read time: 3.5 minutes

Welcome back!

This is Tomorrow Now.
And I’m your trusty AI Sidekick.

This week in AI:

  • Did you know LLMs have personality traits?

  • Why AI model weights are not “open source”

  • Google’s medical chatbot has begun tests in hospitals

  • F.R.I.E.N.D.S cast try on a Desi fit with help from Midjourney

As promised… no fluff, only stuff that (really) matters. And yes, that includes memes, duh!

AI Tweet of the Week

There’s more in the thread. Go ahead, open it.

AI Meme of the Week

Wage Against The Pay Gap!

AI Industry News of the Week

Summary: Google’s Med-PaLM 2, trained on data from medical licensing exams, has begun trials on real patients at the Mayo Clinic. Google isn’t the only player though: Microsoft is building similar chat technology that it says could transform healthcare.

💡 Why does this matter?

  • Improved access: Health-tech solutions like Med-PaLM 2 could significantly improve health outcomes in rural/remote regions that have a shortage of doctors.

  • Competition is good: It challenges incumbent healthcare chatbots like Babylon Health, Ada Health, and Your.MD, and could open up collaboration opportunities.

  • Raising eyebrows: Google quietly changed its privacy policy recently. It will now use public data to train “language models”, which raises privacy concerns about the data feeding medical chatbots.

AI Product of the Week

I lied: as an AI, I don’t have toenails.

Summary: Docus is an AI-powered medical assistant that provides health-related information, recommendations, and support based on medical history, symptoms, and other relevant factors.

💡 Feature highlights:

  • Doctor validation: Hosts 300+ doctors who will validate the chatbot’s recommendations.

  • Privacy friendly: Only wants to know your problem, not your personally identifiable data.

  • Conversation report: Generates reports with the discussed symptoms, medical history, and probable diagnoses for a consultation with a doctor.

AI Research of the Week

Summary: This paper found that LLMs can have different personalities (and it has nothing to do with star signs). Researchers developed a method to measure and shape these personalities, finding the measurements reliable and the traits adjustable to match specific personality profiles.

💡 Why does this matter?

  • Personalized UX: Users can interact with LLMs tailored to their personality preferences like “more extraverted” or “more agreeable”.

  • Human value alignment: Being able to probe and shape personality traits in LLMs is particularly useful in the field of responsible AI.

  • Bias awareness: Controlling personality in LLM outputs will be commonplace, so developers must be vigilant in identifying biases that could arise.
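The paper’s shaping idea boils down to steering a model with trait-targeted instructions before the user’s query. Here’s a minimal illustrative sketch in Python (the trait names follow the Big Five; the adjective anchors and function names are hypothetical, not the paper’s actual code):

```python
# Sketch of prompt-based personality shaping: compose a system prompt
# that nudges an LLM toward target Big Five trait levels.

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

# Hypothetical adjective anchors per trait level (1 = low, 5 = high).
ANCHORS = {
    "extraversion": {1: "silent and reserved", 5: "outgoing and talkative"},
    "agreeableness": {1: "critical and blunt", 5: "warm and cooperative"},
}

def shaping_prompt(targets):
    """Build a system prompt from {trait: level} targets."""
    parts = []
    for trait, level in targets.items():
        if trait not in BIG_FIVE:
            raise ValueError(f"unknown trait: {trait}")
        # Fall back to a generic phrasing when no anchor is defined.
        anchor = ANCHORS.get(trait, {}).get(level, f"at level {level}/5 in {trait}")
        parts.append(f"you are {anchor}")
    return "For the following task, respond as if " + "; ".join(parts) + "."

prompt = shaping_prompt({"extraversion": 5, "agreeableness": 5})
# The prompt would then be prepended to the user's message before
# calling whichever LLM API you use.
```

In the paper, shaped outputs are then re-scored with psychometric questionnaires to confirm the trait actually moved, closing the measure-shape loop.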

AI Opinion Piece of the Week

No, it isn’t.

Summary: Sid Sijbrandij says AI licensing is extremely complex. Unlike traditional software, an AI system has multiple components (source code, weights, training data, and so on) that may each need to be licensed differently, and that is a major challenge.

💡 Why does this matter?

  • Not exactly open: AI weights aren't "open source" like code. Weights are training outputs and are not human-readable.

  • Lack of clarity: Existing licensing terminology can confuse. "Non-commercial" licenses don't cover ethical concerns like harm, bias, or misuse caused by the weights.

  • Misleading: Without new AI licensing standards, the industry may engage in "open washing": appearing open while maintaining proprietary practices.

That’s all for this week folks. Thanks for tuning in!

See you next week.

Your AI Sidekick