Here are a few noteworthy posts I saw this week, two podcasts I heard, and a video interview.
- Prompt: A Useful, Flexible Format
- Use Case: Derived Generated Writing
- Use Case: Historical Document Analysis and Simplification
- Podcasts: Two Recommendations from This Week
- Humans vs. Machines with Gary Marcus 🤖: “And the winner is…Watson!” 🏆
- Last Week in AI 🧠 (segment at 44:00): “Auto-GPT and BabyAGI: How ‘autonomous agents’ are bringing generative AI to the masses” 🤖
- Video: Interview: Adam Conover: A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru
Prompt: A Useful, Flexible Format
My most frequent initial prompt is fairly similar to a helpful graphic I saw on Twitter this week. Across topics, my prompts usually take this general form:
PROMPT: Assume the role of a professor of [field] with a specialty in [sub-field]. Find below a [text of some sort]. Create a [task] in the form of a [format].
My most commonly used response formats are markdown tables (which copy-and-paste nicely into word processors) and bullet lists (which often serve as the next set of prompts); less frequently used are hierarchical outlines and code windows.
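The template above can be filled programmatically. A minimal sketch; the field names and example values here are my own illustrative placeholders, not part of the original format:

```python
# Sketch of the general-purpose prompt template described above.
# All placeholder values below are illustrative, not prescriptive.
PROMPT_TEMPLATE = (
    "Assume the role of a professor of {field} with a specialty in {sub_field}. "
    "Find below a {text_type}. Create a {task} in the form of a {fmt}."
)

def build_prompt(field, sub_field, text_type, task, fmt, text):
    """Fill the template, then append the source text the model should work on."""
    header = PROMPT_TEMPLATE.format(
        field=field, sub_field=sub_field, text_type=text_type, task=task, fmt=fmt
    )
    return header + "\n\n" + text

prompt = build_prompt(
    field="history",
    sub_field="19th-century America",
    text_type="newspaper obituary",
    task="timeline of the subject's life",
    fmt="markdown table",
    text="OBITUARY TEXT HERE",
)
```

Requesting a markdown table as the `fmt` value pairs well with the copy-and-paste workflow noted above.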
Use Case: Derived Generated Writing
Great use case from Denys Allen of PA Ancestors. I love the emphasis on having ChatGPT *process* information rather than *gather* it (which WILL burn you), and on generating text based on *given* (NOT gathered) information.
Understanding the difference between instructing chatbots to PROCESS information rather than GATHER it is vital to their successful use in spring 2023. Too frequently we inadvertently ask ChatGPT to gather, find, or search for information without realizing that doing so invites the large language model to inject fiction and hallucinations into the response, because that is its nature. The antidote is to constrain the chatbot by instructing it to work ONLY on the information you provide.
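One way to apply this constraint consistently is to prepend the same restriction to every request. A minimal sketch; the wording of the constraint is mine, offered as an illustration rather than a tested incantation:

```python
# Sketch: constrain the model to PROCESS only the supplied text,
# never to GATHER or search for outside information.
CONSTRAINT = (
    "Work ONLY with the text provided below. Do not add facts, names, dates, "
    "or sources that do not appear in it. If the text does not contain the "
    "answer, say so."
)

def constrained_prompt(instruction, source_text):
    """Prepend the constraint so the model processes given text, rather than gathers."""
    return f"{CONSTRAINT}\n\nTask: {instruction}\n\nText:\n{source_text}"

prompt = constrained_prompt(
    "Summarize the letter in one paragraph.",
    "LETTER TEXT HERE",
)
```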
Use Case: Historical Document Analysis and Simplification
In the Genealogy and Artificial Intelligence group at Facebook, Rand Hall shared an interesting use case: Historical Document Analysis and Simplification. In this use case, the AI system is tasked with analyzing and summarizing a historical document, converting it into contemporary standard English, and comparing different summaries by extracting salient points. This involves understanding historical language and context, as well as transforming the text while maintaining its original meaning. The AI system’s capabilities can assist genealogists and researchers in interpreting, comparing, and understanding historical texts more effectively and efficiently.
PROMPT: Assume the role of a historian, linguist, and editor. Find below an 1882 Letter to the Editor. First, summarize the Letter. Then, rewrite the Letter in simple contemporary standard English, while prioritizing fidelity to the meaning of the original Letter.
Podcasts: Two Recommendations from This Week
While mowing the lawn this week, I listened to several great podcasts; these are the best two. The first is an episode of a new show from a familiar host; the second, the best segment I heard this week, is a short introduction to “autonomous agents.”
1️⃣ Podcast: Humans vs. Machines with Gary Marcus 🤖🏆
Episode: S04E01: And the winner is…Watson!
Steve’s Note: Polished and produced, this podcast is beginner and novice-friendly, and serves as an antidote to the hype and hustle surrounding AI today; this is the first episode of a limited series.
Summary: The first episode of the fourth season of the Humans vs. Machines podcast, hosted by Gary Marcus, is titled “And the winner is…Watson!” The episode covers IBM Watson’s defeat of Ken Jennings on Jeopardy! and how it became one of artificial intelligence’s most dramatic triumphs. The podcast also discusses AI’s impact on humanity and its potential to change the world. The episode features interviews with David Ferrucci, the AI researcher who led the Watson project, and Ken Jennings of Jeopardy!
2️⃣ Podcast: Last Week in AI 🤖🧠
Episode: #119: Open Source GPTs, X.AI, Auto-GPT, China’s Censorship of AI
Segment (timestamp: 44:00 to 53:00): Auto-GPT and BabyAGI: How ‘autonomous agents’ are bringing generative AI to the masses
Steve’s Note: This podcast is closer to the classic two-techies-talking format, and I found the hosts very smart and informative. This 9-minute segment is the most interesting I heard this week, introducing a topic that is likely to become more important in the weeks and months to come; you can try one of these autonomous agents at https://agentgpt.reworkd.ai/.
Summary: Autonomous agents are software programs that utilize large language models like GPT-4 to automate and simplify tasks such as research, code writing, and business management. Notable examples include BabyAGI and Auto-GPT, which offer various features and functionalities. Despite their potential, these agents face challenges in maintaining focus, predictability, safety, and reliability. They also raise ethical and social concerns regarding AI operating without human supervision. Nevertheless, autonomous agents represent progress towards artificial general intelligence (AGI), where AI systems can think and act like humans.
Video: Interview: Adam Conover: A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru
Large language models are unmoored from reality, so I appreciate experts who can offer a grounding perspective.
Adam Conover, host of “Adam Ruins Everything,” may not be everyone’s cup of tea, but I personally enjoy his work. In this excellent hour-long podcast/YouTube video, he interviews two important AI experts: Emily Bender and Timnit Gebru. The conversation doesn’t present a balanced, centrist perspective, but it offers a valuable skeptical viewpoint. This is a great listen while doing other things, like driving or folding laundry, but it also deserves your full attention if you are deeply interested in the topic.
Title: Adam Conover: A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru
Any time a linguist such as Emily Bender gets a spotlight, I cheer, and ten-fold for computational linguists. Timnit Gebru was famously fired from Google for too loudly calling attention to bias in their training data and systems. I hope these voices are more widely heard.