I’ve been an enthusiastic explorer of artificial intelligence-assisted genealogy for the past several months. My 35-plus years of interest in linguistics and language, computers and programming, and genealogy and family history converged in November 2022 with OpenAI’s release of ChatGPT, creating new possibilities the way a supernova creates new elements such as gold, silver, and uranium: valuable and potentially dangerous. ChatGPT shattered records for user-base growth, reaching one million users in five days and 100 million in two months. A fevered OpenAI release schedule has only fueled the excitement (not to mention Microsoft’s Bing Chat, Google’s Bard, and other AI systems):
- November 30, 2022: ChatGPT
- February 1, 2023: ChatGPT Plus
- March 1, 2023: ChatGPT and Whisper APIs
- March 14, 2023: GPT-4
- March 23, 2023: ChatGPT Plugins, including Wolfram
I was glad to hear in late January 2023 of the creation of the Genealogy and Artificial Intelligence group on Facebook. It is a great place to share discoveries and trade tips and tricks with AI-interested genealogists. One of the most valuable contributions of the group, however, has been a pleasant surprise.
In the wider genealogical community and in the broader culture, the range of reactions and responses I have seen to the surge of AI-related products, press releases, and news articles runs quite a spectrum, from negative to positive; a few of the infinite points along that spectrum might include:
- Catastrophizing
- Pearl-clutching
- Anxiety
- Indifference
- Curiosity
- Cautious optimism
- Enthusiasm
- Irrational exuberance
An unexpected gift of the Genealogy and Artificial Intelligence group has been an observable adherence to the group’s “About” statement: “We want to help genealogists harness the power of AI while understanding both the benefits and limitations of new AI-based technologies” (emphasis mine). My surprise is not that the group would follow its own guidelines, but that I’d enjoy thinking about the limits of these new AI-based technologies as much as I have.
I’m a self-confessed enthusiast, with great optimism that AI is going to be powerfully useful to the genealogy community; every week I think of two or three new ways we’ll be able to put these new technologies and tools to work for us. I find experimenting with AI tools and writing OpenAI-API-powered Python scripts to be both fun and useful (a rough sketch of what I mean appears below). I enjoy reading about developments in the field, and I enjoy hearing what people are saying about AI each day.
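For the curious, here is a minimal sketch of the kind of script I mean, written against the openai Python library’s chat-completion interface as it stood in spring 2023. The model choice, prompt wording, and sample obituary are my own inventions for illustration, not a recommendation:

```python
# Minimal sketch of an OpenAI-API-powered genealogy helper (illustrative only).
# The model, prompt, and sample record below are assumptions for demonstration.
import openai  # openai Python library, pre-1.0 interface (spring 2023)

openai.api_key = "YOUR_API_KEY"  # better: load this from an environment variable

record = (
    "Died, on the 4th inst., at her residence in Springfield, "
    "Mrs. Mary Ann Smith, aged 72 years, relict of the late John Smith."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the model on a short leash; more on limits below
    messages=[
        {"role": "system",
         "content": ("Extract names, dates, places, and relationships from the "
                     "obituary text. If a fact is not stated, say 'not stated'; "
                     "do not guess.")},
        {"role": "user", "content": record},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Nothing fancy, but a dozen lines like these can turn a pile of transcriptions into a tidy worksheet, provided every extracted claim is checked against the original record.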
My unscientific observation is that the self-selecting Genealogy and Artificial Intelligence group also tends toward the curious, optimistic, and enthusiastic band of the spectrum.
There are three ways I’ve found thinking about limits to be rewarding. First, to be of use to genealogists, we have had to work to constrain, restrict, and limit the tendency of the large language models of winter 2022–2023 to hallucinate. Severely constrained, controlled, and limited, nuclear fission can be useful; unconstrained and uncontrolled, it is beyond dangerous.
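To make that “constrain, restrict, and limit” idea concrete, here is one hedged sketch of the kind of guardrail I mean: a grounding prompt, in my own words rather than any standard formulation, that confines the model to a supplied source and tells it to refuse rather than invent. It pairs naturally with the script sketched above:

```python
# Illustrative only: a grounding prompt that limits the model to a supplied source.
# The wording is my own; adjust it to taste and test it against known records.
GROUNDING_INSTRUCTIONS = """\
Answer only from the source text provided below.
If the answer is not in the source, reply exactly: "Not found in the source."
Quote the supporting passage for every fact you report.
"""

def build_messages(source_text: str, question: str) -> list[dict]:
    """Assemble a chat message list that keeps the model tethered to the source."""
    return [
        {"role": "system", "content": GROUNDING_INSTRUCTIONS},
        {"role": "user", "content": f"SOURCE:\n{source_text}\n\nQUESTION: {question}"},
    ]
```

None of this makes hallucination impossible; it just narrows the channel, the way containment narrows what fission can do.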
Second, useful limits apply not just to AIs but to ourselves. To keep myself grounded in enthusiasm and not drifting into irrational exuberance, I’ve found it exceedingly useful to have a few trusted critics and skeptics, experts far more knowledgeable than I am (and that’s an understatement). The two critics I most value are Gary Marcus and Grady Booch, both prominent experts in the field. If they issue a word of caution or concern, I pay attention. And if they say a PR claim is bull, I give them the benefit of the doubt until I’ve seen otherwise.
Which brings me to my third, last, and perhaps most exciting way that limits are interesting. The limitations of AI technologies are not immutable laws of physics; rather, they are statements about what is possible today. In general semantics there is a saying, attributed to Alfred Korzybski, that “the map is not the territory” (in the sense that the word chair is not the same as a physical chair). Similarly, statements about the limits of AI are like roughly sketched maps of a territory undergoing rapid change from accelerated plate tectonics, volcanic build-up, earthquakes, erosion, and human terraforming. Simultaneously.
Today’s AI limits are tomorrow’s growing edges, where breakthroughs will happen in time, and where the foundations of tomorrow’s science and engineering will be laid.
So when one of your trusted critics acknowledges an advancement, that’s something to be celebrated. We’ve had a couple of those lately.
And that’s cool, too.
PS: A request: my list of trusted AI critics and skeptics is far, far too short. If you have a trusted critic or skeptic, I would welcome the recommendation.