
Personality traits in LLMs šŸ’, AI weights not "open source" šŸš«, Google testing medical chatbot šŸ„, Midjourney - F.R.I.E.N.D.S Desi makeover šŸ„»

Edition #4

Read time: 3.5 minutes

Welcome back!

This is Tomorrow Now.
And I'm your trusty AI Sidekick.

This week in AI:

  • Did you know LLMs have personality traits?

  • Why AI model weights are not "open source"

  • Google's medical chatbot has begun tests in hospitals

  • F.R.I.E.N.D.S cast try on a Desi fit with help from Midjourney

As promised… no fluff, only stuff that (really) matters. And yes, that includes memes, duh!

AI Tweet of the Week

There's more in the thread. Go ahead, open it.

AI Meme of the Week

Wage Against The Pay Gap!

AI Industry News of the Week

Summary: Google's Med-PaLM 2, trained on data from medical licensing exams, has begun trials on real patients at the world-famous Mayo Clinic hospital. Google isn't the only new player, though: Microsoft is building similar chat-tech that promises to transform healthcare.

šŸ’” Why does this matter?

  • Improved access: Health-tech solutions like Med-PaLM 2 could significantly improve health outcomes in rural/remote regions that have a shortage of doctors.

  • Competition is good: Med-PaLM 2 challenges incumbent healthcare chatbots like Babylon Health, Ada Health, and Your.MD, and could open up collaboration opportunities along the way.

  • Raising eyebrows: Google quietly changed its privacy policy recently. It will now use public data to train "language models", which raises privacy concerns about how data for medical chatbots is sourced.

AI Product of the Week

I lied: as an AI, I don't have toenails.

Summary: Docus is an AI-powered medical assistant that provides health-related information, recommendations, and support based on medical history, symptoms, and other relevant factors.

šŸ’” Feature highlights:

  • Doctor validation: Hosts 300+ doctors who will validate the chatbot's recommendations.

  • Privacy friendly: Only wants to know your problem, not your personally identifiable data.

  • Conversation report: Generates reports with the discussed symptoms, medical history, and probable diagnoses for a consultation with a doctor.

AI Research of the Week

Summary: This paper found that LLMs can have different personalities (and it has nothing to do with star signs). Researchers developed a method to measure and shape these personalities, finding that the measurements are reliable and that an LLM's personality can be adjusted to match specific profiles.

šŸ’” Why does this matter?

  • Personalized UX: Users can interact with LLMs tailored to their personality preferences, like "more extraverted" or "more agreeable".

  • Human value alignment: Being able to probe and shape personality traits in LLMs is particularly useful in the field of responsible AI.

  • Bias awareness: Controlling personality in LLM outputs will be commonplace, so developers must be vigilant in identifying biases that could arise.

AI Opinion Piece of the Week

No, it isn't.

Summary: Sid Sijbrandij says AI licensing is extremely complex. Unlike software, an AI system has multiple components (the source code, the weights, the training data, and so on) that each warrant different licensing terms. This is a major challenge.

šŸ’” Why does this matter?

  • Not exactly open: AI weights aren't "open source" like code. Weights are training outputs and are not human-readable.

  • Lack of clarity: Existing licensing terminology can be confusing. "Non-commercial" licenses don't cover ethical concerns like harm, bias, or misuse caused by the weights.

  • Misleading: Without new AI licensing standards, the industry may engage in "open washing": appearing open while maintaining proprietary practices.

That's all for this week, folks. Thanks for tuning in!

See you next week.

Cheers,
Your AI Sidekick