Why My Wife and I Decided to Write a Book Using ChatGPT

October 3, 2024

Toward the end of 2022, ripples of ChatGPT-generated buzz began to propagate through my network of technical colleagues. The whisper was, “This is new. This is something.” I dutifully set myself up with a ChatGPT account, and like almost everyone who tries modern AI for the first time, I found my head filling with questions and concerns. I was also impressed.

It is true that there are future existential threats that humanity will need to navigate, which I will touch on below, but my existential fears were more immediate: I had recently published my third book on AI, and I began to wonder, “Was that my last book? Has AI overtaken humanity in the realm of creative writing and nudged me off the top of the food chain?”

I shared my concerns with my wife, Theresa Hart, who is an entrepreneur and attorney, but rather than fret, we hatched a plan. We began work on a new AI book, to be written entirely by ChatGPT and illustrated by an AI service called NightCafe. Thus, “ChatGPT, An AI Expert, and a Lawyer Walk Into a Bar…The History of Creativity and Communication” was born. The rules were simple: We could enter any prompt we liked, but we could not directly edit the resulting text.

Testing ChatGPT’s early capabilities

We explored the limits of that version of ChatGPT: we asked dumb questions, we asked evil questions, and we asked it to write jokes and entertain us. What we found was a tool that went far beyond any previous language-generation technology, and in many cases we observed behavior that mimicked creativity. We also found that ChatGPT had limitations that prevented it from always getting facts right, and we noted that it could not form an opinion or argue a point for the purposes of making a case or telling a story.

In short, ChatGPT was a powerful tool for augmenting humans, but not yet one that could replace them. I began to think of Generative AI (“GenAI”) as more of a colleague than a competitor. Phew!

Fast forward almost two years… We have seen explosive growth in the development and propagation of new GenAI models: multiple versions of GPT and ChatGPT, Claude, Gemini, and Llama, just to name a few. Of particular note are the multimodal GenAI models that can generate images, video, and audio from text, and vice versa, because these technologies have already been used by criminals to scam companies and individuals out of millions of dollars, and to manipulate the public with false imagery.

These developments are generating excitement, but also reigniting our original existential fears. Will my job be displaced by AI? Will I be scammed by an AI? Will AI take over the world? What legislation should be put in place to protect human society from AI?

This last question is particularly timely: Governor Gavin Newsom of California just vetoed broad legislation (SB 1047) aimed at regulating AI, holding companies that develop AI accountable for certain misuses of their tools, and requiring developers to build specific features into AI systems.

I can see on social media that the majority of my high-tech colleagues are celebrating this veto as a victory in the name of speed to market, innovation, and fewer regulatory hurdles. I would personally assert that it is fortunate this legislation did not pass: it arose out of fearful, somewhat uninformed thinking and was not likely to result in beneficial, meaningful improvements in the way we develop and deploy AI.

Thoughtful AI legislation requires discussion across disciplines

I also know for a fact that leaders in a variety of disciplines are strongly in favor of thoughtful legislation that will help ensure protections for consumers and workers without placing unrealistic expectations on the folks who are developing new technology. I was very fortunate to participate in a private, round-table discussion with a dozen well-informed and well-connected leaders from Silicon Valley, Texas, and Washington, DC, covering areas of expertise as broad as quantum computing, robotics, healthcare, governance and policy, venture capital, and international tech journalism. Frankly, it was such a who’s-who that I had to pinch myself that I was included in the cast.

What I experienced was a balanced discussion drawing on a diverse group of perspectives, all converging on a few key ideas: “bad” regulation will hurt innovation and could negatively impact national security; “good” regulation is needed to ensure transparency about the dangers and limitations of technology that is being developed so rapidly right now; and concerns about job loss in some sectors are real, so incentives to upskill employees as their jobs become automated will be necessary, though using regulation to slow the deployment of new automation technology is not a good idea.

“Bad” regulation, in this context, tended to mean specific requirements on what an AI system could do and how it was developed, requirements to expose the details of the data used to train AI, and, importantly, direct accountability for misuse of tools by third-party evildoers.

“Good” regulation tended to include labeling, so that users know when they are interacting with AI or experiencing AI-generated content; increased transparency around the potential risks and shortcomings of AI output; and expansion of existing rules to ensure that using AI to scam or manipulate people is punished vigorously.

I was impressed by the overall sentiment of this group of technical and tech-adjacent leaders: their role is not to resist regulation at all costs, but to provide lawmakers with the informed input that will ensure we end up with sensible regulation that is good for society as a whole.

AI tool developers: Inform the public and lawmakers about capabilities and risks

Getting back to the original subject of this article, it is crucial that the people who are developing these tools, and those who are using them daily to create value, continue to inform the public and lawmakers about the real capabilities of the latest GenAI systems, the known risks and design flaws, and what we might realistically need to prepare for in the next one to two years. Beyond this timeframe, the future is not knowable, so we also need to maintain some flexibility in our approach to AI regulation. I hope that the discussion I participated in, which was intended to generate real inputs to lawmakers, is being mirrored by many others in tech who can truly speak to the strengths, weaknesses, and evolving abilities of modern GenAI.

One of the most distinguished members of the group pointed out that it is hard to fix a law once it is in place. He cited HIPAA, which was designed as an initial attempt to protect the privacy of healthcare patient data in anticipation of an increasingly connected world. The law was written in 1996, before the Internet was ubiquitous, before smartphones, before social media, and it is now painfully outdated; yet despite these shortcomings, it has not substantially changed. That is all the more reason to build flexibility into our approach to AI legislation.

As I reflect on all of this, I wonder if perhaps it’s time for Theresa Hart and me to get together with ChatGPT to write another book. This one would be about AI safety, societal impact, and the quest to create beneficial AI legislation. Maybe it would be called, “ChatGPT, an AI Expert, and a Lawyer Walk into Congress…”
