Tam Hunt is a lawyer and activist based on the Big Island. He is co-founder of Think B.I.G. and a board member for the Hawaii Electric Vehicle association.
A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.
Not a day passes without news of the great strides being made in artificial intelligence, and without warnings from industry insiders, academics and activists about the potentially very serious risks AI poses.
A survey of AI experts found that 36% fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, have signed an open letter written by the Future of Life Institute asking for a six-month pause or a moratorium on new advanced AI development.
As a public policy lawyer and a researcher in consciousness (I have a part-time position at UC Santa Barbara’s META Lab), I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.
Why are we all so concerned? In short: AI development is going way too fast and it’s not being regulated.
‘Rapid Acceleration’
The key issue is the profoundly rapid improvement in the new crop of advanced “chatbots,” or what are technically called “large language models,” such as ChatGPT, Bard, Claude 2 and many others coming down the pike.
The pace of improvement in these AIs is truly impressive. This rapid acceleration promises to soon result in “artificial general intelligence,” which is defined as AI that is as good as or better than a human at almost anything a human can do.
When AGI arrives, possibly in the near future but possibly in a decade or more, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, DeepMind’s AlphaZero learned in 2017 how to play chess better than even the very best human or other AI chess players, in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.
In testing, GPT-4 performed better than 90% of human test takers on the bar exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% in the previous GPT-3.5 version, which was trained on a smaller data set. OpenAI found similar improvements in dozens of other standardized tests.
Most of these tests are tests of reasoning, not of regurgitated knowledge. Reasoning is perhaps the hallmark of general intelligence, so even today’s AIs are showing significant signs of general intelligence.
This pace of change is why AI researcher Geoffrey Hinton, formerly with Google for many years, told the New York Times: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation “crucial.” But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by major AI development companies like Google and OpenAI.
A voluntary approach to regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.
With the “AI explosion” underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it’s safe.
I’m working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems simply incapable of acting rapidly enough given the scale of this threat.
The new office would follow the precautionary principle in placing the burden on AI developers to demonstrate that their products are safe before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they’re being adopted at record speed, with literally no proof of safety.
The new Hawaii Office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. Less risky products would be subject to lighter regulation, and riskier AI products would face more burdensome regulation.
My hope is that this approach will help to keep Hawaii safe from the more extreme dangers posed by AI, dangers that another recent open letter, signed by hundreds of AI industry leaders and academics, warned should be considered as serious as nuclear war or pandemics.
Hawaii can and should lead the way on a state-level approach to regulating these dangers. We can’t afford to wait for Congress to act, and it is all but certain that anything Congress adopts will be far too little, too late.
Am I wrong to say it’s already too late? I see it as comparable to gun control: you can pass gun control laws, but the compliant become victims of the parties who do not comply. The entities (worldwide) who would limit the advancement of AI/AGI in favor of safety management would only fall behind the worldwide entities that would propagate its advancement for their own directives. We can attempt control at this point, but regulation limited to “willing” participants is moot. We can’t take guns out of the hands of the bad guys.
Paintblush · 1 year ago
Humans are smart enough to invent AI, but not smart enough to figure out where it will go. The potential for harm is great. Tam Hunt is right. We should apply the precautionary principle.
sleepingdog · 1 year ago
Sorry, wrong state. It is California, not Hawaii, that should be taking the lead on this. If the feds are unable to regulate this field, have states do it, but using something like the Uniform Commercial Code as a model. Hawaii should not try to do this on its own.
Our governor recently had to declare a state of emergency for the housing market because it is so hopelessly overregulated. Do you really want to do the same for scientific progress?
Were this implemented, researchers and developers in Hawaii would be unable to use the latest tools in their work. Researchers in areas like climatology, genetics, cancer research, chemistry and many others would be barred from using the latest tools until granted permission. Hawaii is too small and would not be able to find bureaucrats competent enough to understand what they would be regulating. Thus these bureaucrats would not be able to respond quickly; it could take months or more likely years for researchers to be granted permission to use the best tools.
Have you tried to get a building permit lately?