
Free Prompt Engineering Course 📚, OpenAI optimises pricing 💰, LLMs learn to draw the world map 🗺

Edition #16


This is Tomorrow Now.
And I’m your trusty AI Sidekick.

This week in AI:

  • Free Prompt Engineering Course: Learn how to best use AI tools completely FREE

  • A picture is worth a thousand words: GPT-4's image recognition can be hijacked by instructions embedded in images

  • OpenAI gets cheaper: OpenAI plans to unveil cheaper APIs at its upcoming developer conference

  • AI Ethics: Knowing the important questions to answer as we enter the age of AI

AI Tweet of the Week

Summary: A fascinating side effect was found in GPT-4's latest image recognition feature: the model follows instructions written inside an image over the user's direct prompt. The experiment highlights the tricky balance between obedience and integrity in AI. Check out the tweet to explore this conundrum and get some insight into how the changing LLM landscape could further impact trust in AI.

AI Meme of the Week

The best part about using ChatGPT? More time for fun and games.

AI Business News of the Week

Summary: OpenAI is set to unveil key updates aimed at attracting developers by making it cheaper and more flexible to build software applications with its AI models. The company plans to introduce features such as memory storage for its AI models and vision capabilities for app creation, targeting cost reductions for developers of up to 20x while broadening applicability across sectors such as entertainment and medicine.

💡 Why does it matter?

  • Luring with cost-cutting: The planned stateful API aims to drastically cut costs for developers by retaining the conversation history on the server, so developers no longer pay to resend it with every request.

  • Image Analysis Capabilities: OpenAI intends to introduce a vision API, granting developers the means to build software that analyzes images, indicating OpenAI's advance into multi-modal capabilities.

  • Navigating a Competitive Landscape: Amidst a context of startups exploring options with OpenAI’s competitors and open-source models, OpenAI seeks to solidify its standing and indispensability among developers and businesses by unrolling these cost-efficient, utility-rich updates.
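To see why server-side history matters, here is a minimal back-of-the-envelope sketch (not OpenAI's actual API or pricing): with a stateless chat API, every request must resend all previous messages, so the total tokens billed grow quadratically with conversation length, while a stateful API bills only the new turn.

```python
# Hypothetical illustration of stateless vs. stateful chat billing.
# Assumes a fixed number of tokens per conversational turn.

def stateless_cost(turn_tokens, num_turns):
    """Tokens billed when the full history is resent on every request."""
    total = 0
    history = 0
    for _ in range(num_turns):
        history += turn_tokens   # the new exchange joins the history
        total += history         # the whole history is sent each request
    return total

def stateful_cost(turn_tokens, num_turns):
    """Tokens billed if the server retains the history between requests."""
    return turn_tokens * num_turns  # only the new turn is sent

print(stateless_cost(100, 20))  # 21000 tokens
print(stateful_cost(100, 20))   # 2000 tokens -> over 10x cheaper here
```

The longer the conversation, the bigger the gap, which is why a stateful API is pitched as a major cost saving for chat-heavy applications.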

AI Product of the Week

Summary: LearnPrompting is a free and open-source curriculum that allows you to learn how to use ChatGPT and other AI tools to accomplish any goal!

💡 Key Features:

  • Free and open-source curriculum designed by three researchers

  • Beginner to advanced lessons covering prompt hacking, image prompting, and prompt tuning

  • Access to all future lessons and articles

AI Research of the Week

Finding a literal map of the world inside the activations of LLMs

Summary: The paper reveals that the Llama-2 model family constructs coherent linear representations of space and time, indicating sophisticated learned world models rather than memorized surface statistics. The team also presents evidence of individual “space neurons” and “time neurons” that distinctly encode spatial and temporal coordinates, reflecting the depth of the models' constructed knowledge of the world.

💡 Why does it matter?

  • Neurons can learn real-world representations: Despite merely being trained to predict the next token, LLMs, given adequate model and data size, are capable of forming maps of the world. This extends our understanding of how LLMs encode data.

  • Robust Representations: These spatial and temporal representations are robust to variations in prompting and consistent across diverse entity types, showcasing a remarkable depth and stability in the world models constructed by LLMs.

  • Future Implications: Understanding how fundamental dimensions like space and time are learned, recalled, and used internally is pivotal for reasoning about the safety of present and future AI systems.
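A "linear representation" here means a coordinate can be read off the model's activations with a simple linear probe. Below is a toy sketch (not the paper's code) using fabricated activations that linearly encode a latitude value plus noise; fitting a least-squares probe recovers the coordinate, mirroring how such representations are detected in real LLM hidden states.

```python
import numpy as np

# Toy demonstration of a linear probe on simulated activations.
rng = np.random.default_rng(0)
n_entities, d_model = 500, 64

latitude = rng.uniform(-90, 90, size=n_entities)      # ground-truth coordinate
direction = rng.normal(size=d_model)                  # hidden "space direction"
noise = rng.normal(scale=0.5, size=(n_entities, d_model))

# Simulated hidden states: coordinate * direction + noise
activations = np.outer(latitude, direction) + noise

# Fit the probe: find w such that activations @ w ~= latitude
w, *_ = np.linalg.lstsq(activations, latitude, rcond=None)
pred = activations @ w

# R^2 near 1.0 means the coordinate is linearly decodable
r2 = 1 - np.sum((latitude - pred) ** 2) / np.sum((latitude - latitude.mean()) ** 2)
print(f"probe R^2: {r2:.3f}")
```

In the actual research the activations come from real place and event tokens inside the model, but the probing idea is the same: if a cheap linear map decodes latitude, longitude, or dates, the model has organized that information linearly.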

AI Opinion Piece of the Week

Summary: This opinion piece explores the challenge of trusting artificial intelligence (AI) due to its inherent unpredictability and lack of transparency. It argues that predictability and adherence to basic ethical principles are essential elements of building trust, and that despite recent progress, AI still fundamentally lacks these elements. The article highlights the need to resolve these concerns, for example through human-in-the-loop system design.

💡 Why does it matter?

  • Transparency vs. power trade-off: In general, larger and more complex models are more capable of human-like intelligence, but they are also less transparent in terms of their decision-making.

  • There’s more at stake when using AI in critical systems: In critical systems, such as military weaponry or electric grids, undesirable behavior could yield severe consequences, so trust becomes even more important when integrating AI into such systems.

  • Can AI ever be made ethical? Researchers are working on developing AI systems that can follow pre-defined ethical principles, but whether human-level morality can ever be achieved remains an open question.

Thanks for tuning in!

See you next week.

Your AI Sidekick