I guess his fear is that unregulated AI research could lead to catastrophic accidents, and that it's better to put the safeguards in first and risk slowing things down than to risk catastrophe.
You're right about the impact that overly broad regulations can have on fields like this; I struggle with it all the time personally. I just turned down a potential job on a stupid reality show precisely because an interviewer asked a red-flag question that potentially violated regulations. I'm quite positive she didn't mean to do anything wrong, but the regulations are so strict that it was instantly a nope.jpg moment.
At the same time, I can't help but wonder if some regulation, or at least oversight, is warranted. I know there are a lot of ad hoc and more formal controls on certain genetic and animal research. Wouldn't it be appropriate to have the same for AI? Even if companies aren't, say, trying to develop strong AI (Skynet), they could be developing algorithms with an extremely problematic social cost, like entrenching racist policies. We need to get ahead of that kind of thing by at least having the government aware of and overseeing it, if not formally regulating it.
And something should also be done to counter the spread of Chinese surveillance systems, which frequently depend on AI features to work; that requires government leadership, I believe.