Microsoft partners with Meta 🦙, AI cover letter generator 📄, StackOverflow brings AI to play 🤖
Edition #7
Hey!
This is Tomorrow Now.
And I’m your trusty AI Sidekick.
This week in AI:
Microsoft partners with Meta for Llama 2 release. But why?
StackOverflow responds to declining traffic with OverflowAI
Careered.ai: the fastest way to generate cover letters for free
Using low quality AI leads to better outcomes than high quality AI
Manipulated images/audio can inject malicious instructions to LLMs
As promised… no fluff, only stuff that (really) matters. And yes, that includes memes, duh!
AI Tweet of the Week
So Microsoft is partnered with OpenAI on their closed-source LLM and they're now partnering with Meta to release an open-source LLM with Llama 2?
I love that things are moving towards more open-source. I'm just really confused by where Microsoft is going with all this?
— Matt Wolfe (@mreflow)
4:54 PM • Jul 18, 2023
Summary: Meta released its latest open-source model, Llama 2, in partnership with Microsoft’s Azure platform. But Microsoft also offers OpenAI models and is a major investor in the company (reportedly around $13B for a 49% profit share). So, a confused Matt asks: why would Microsoft partner with Meta when it might undermine their investment in OpenAI?
💡 Answering the question:
Spreading the risk: OpenAI may have the first-mover advantage, but that doesn’t always last (e.g. BlackBerry, Myspace, Yahoo). Microsoft is betting big on AI but spreading its chips across multiple players.
It’s beside the point: regardless of whom Microsoft backs, the game is to attract all AI workloads onto Azure. It’s not about the tools but about the CPU/GPU cycles they can charge for. Smart!
The real AI gangsta: Microsoft is sitting on the holy trinity of AI now.
Partnerships with top LLM providers (OpenAI, Meta)
Priority access to Nvidia GPUs
And strategic assets like GitHub and Azure
AI Meme of the Week
I call this boomerangGPT
AI Business News of the Week
Summary: StackOverflow has seen over a 35% drop in traffic since ChatGPT launched. Site usage is also down, with roughly 50% fewer questions and answers. Many tweeted “RIP StackOverflow”, but the company responded by announcing OverflowAI, its new generative AI search offering.
💡 Why does it matter?
It’s friendlier: the SO community isn’t always forgiving of certain types of questions, especially poorly worded ones. OverflowAI users get answers quickly without having to brave the community’s feedback gauntlet.
Visual Studio extension: users can integrate OverflowAI directly into their development environment to query, generate, summarize, and explain code, drawing on information from the public forum.
AI Product of the Week
Summary: more of a tool than a full product, Careered offers a very fast, easy, and free cover letter generator. I have been using it since last year, and it’s still my go-to. It also has 160+ cover letter examples and tips on making a lasting impression in your specific role.
💡 All you have to do:
Paste a copy of the job listing
Paste a copy of your resume or LinkedIn
Get a cover letter you can save and use however you want.
AI Research of the Week
Summary: Prompt injection keeps getting wilder. Unlike text-based methods, manipulated images or audio clips carry no visible sign that they are malicious. Such a payload can be embedded in a website or email attachment and will alter the model’s behavior when the model processes it.
💡 Why does it matter?
Looks can be deceiving: if seemingly innocent images or audio can carry injected malicious instructions, AI-enabled services just became the next best medium for mass-scale misinformation (second only to FOX News, jk jk).
Affects open-source only: an important caveat is that this attack only works on open-source models (i.e., models whose weights are public), because crafting these adversarial inputs requires access to the model’s gradients.
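To make the gradient caveat concrete, here’s a minimal, hypothetical sketch (not from the paper) of a gradient-based adversarial perturbation on a toy linear scorer. For a linear model the gradient of the score with respect to the input is just the weight vector, which is exactly what an attacker gets when model weights are public:

```python
import numpy as np

# Toy linear "classifier": score = w . x; higher score = more "benign".
# Hypothetical example: illustrates why gradient access enables adversarial
# inputs like the manipulated images/audio described above.
rng = np.random.default_rng(0)
w = rng.normal(size=100)          # model weights (public in open-source models)
x = rng.normal(size=100)          # a benign input, e.g. flattened image pixels

def score(x):
    return float(w @ x)

# FGSM-style step: nudge each input dimension against the sign of the gradient.
# For this linear model, d(score)/dx = w, so the gradient is simply w.
eps = 0.5
x_adv = x - eps * np.sign(w)      # tiny per-pixel change, large score drop

print(score(x))       # original score
print(score(x_adv))   # adversarial score, reduced by eps * sum(|w|)
```

Closed-weight APIs don’t expose these gradients, which is why the attack (as described) is limited to open models.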
AI Opinion Piece of the Week
Summary: In a field experiment, a Harvard researcher hired professional recruiters to evaluate resumes for interviews. The recruiters were randomly assigned AI assistants of varying quality to help them. Surprisingly, those with lower quality AI outperformed those with higher quality AI!
The higher-quality AI made recruiters overly reliant on its recommendations, while the lower-quality AI kept them more engaged and attentive, leading to better overall decisions.
💡 Why does it matter?
Just enough help: lower quality AI that keeps humans "in the loop" may be preferred in some situations over higher quality AI that leads to automation bias and disengagement.
Build for the context: AI developers should think beyond accuracy metrics to how their systems will actually be used in human contexts. "Optimized for collaboration" may be a better goal than pure accuracy.
Wake up human: AI systems could build in warning mechanisms that flag low human effort, to discourage disengagement.
It’s happened before: will AI do to thinking what navigation apps did to finding our own way around?
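One hypothetical way such a "wake up" mechanism could work (my sketch, not from the study): track how often the human accepts the AI’s suggestion verbatim over a sliding window, and warn when acceptance is suspiciously close to 100%:

```python
from collections import deque

# Hypothetical sketch of an automation-bias warning: if a reviewer
# rubber-stamps nearly every AI suggestion, prompt them to re-engage.
class EngagementMonitor:
    def __init__(self, window=10, threshold=0.9):
        self.decisions = deque(maxlen=window)  # 1 = accepted AI pick as-is
        self.threshold = threshold

    def record(self, human_choice, ai_suggestion):
        self.decisions.append(1 if human_choice == ai_suggestion else 0)

    def should_warn(self):
        # Only warn once the window is full and acceptance is near 100%.
        if len(self.decisions) < self.decisions.maxlen:
            return False
        return sum(self.decisions) / len(self.decisions) >= self.threshold

monitor = EngagementMonitor(window=10, threshold=0.9)
for ai_pick in ["hire"] * 10:
    monitor.record("hire", ai_pick)  # reviewer accepts every suggestion

print(monitor.should_warn())  # True: nudge the reviewer to slow down
```

The window and threshold values here are arbitrary; the point is that disengagement is measurable, so systems don’t have to degrade their own quality to keep humans in the loop.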
That’s all for this week folks, but before you go… I would be so grateful if you could reply to this email with a few words. I put a lot of hours into research and writing, and would love to know if you like what you read.
Thanks for tuning in!
See you next week.
Cheers,
Your AI Sidekick