I'm not sure this particular thought holds up, though. We've spent a great deal of effort equipping AI-driven self-driving cars with extrasensory perception akin to echolocation, and we long since gave ourselves that exact ability in the guise of sonar and fish finders. While I recognize and accept the premise that there's a desire to have AI be like us, I don't think the predominant thrust of research is to totally replicate ourselves or our abilities. Rather, I think the working assumption from the outset is that AI will surpass us on all fronts, and it's actually the development and widespread adoption of narrowly-focused, task-oriented AI (which is quite unlike us) that has been the great cultural shock. I'd point back to deep fakes and surveillance applications of AI, which were likely foreseen by the AI community but caught the broader culture off guard.
Don't get me started on self-driving cars. I may need a dedicated thread just for that (and/or AI-in-engineering in general).
Akin to echolocation, sure. The underlying principle is similar (but also not). But we don't consult marine biologists when building such technology (at least, not so far as I can Google). We don't understand what it's used for in nature; we just see the numbers and the technical need for such a technology (also, to avoid lawsuits. Can't stress that reason enough, because ugh). Radar operators in the infancy of radar understood this better than any self-driving car creator seems to. It's a bit unfair to pick on self-driving cars (and again, probably best explored in-depth in another thread) when they're symptomatic of the greater (admittedly capitalistic) industry that drives ideas for profit.
I agree on the dissonance, though I think it goes deeper - there are a lot of intersectional critiques of the application of AI (more than the theory), based on exclusion along lines of race, gender, class, and so on. Even something as simple ("simple") as an automated London Tube system gets caught in the perpetual loop of "the drivers work stupid hours and never see their families, we need to do something about this -> but what jobs would the drivers do -> how about UBI -> UBI is socialist -> automated trains just aren't going to work out". The barriers are more cultural (slash reinforced by class and modern conservative ideology). Applied AI
could surpass us on all fronts, but I don't believe it'd ever do it all at once. SKYNET (listen, okay, I know it's fictional, but bear with me) isn't one thing - it's many things, all at once. It's general theory, applied to specific problems, pulled together under one cohesive framework.
Also, and again kinda going back to engineering, but there's a lot that took us by surprise (and I agree it did, culturally as well as professionally) because so many companies do jack **** with the platform and responsibilities they have. Twitter can absolutely annihilate accounts for specific phrasings of queer jargon, or for the vaguest threat of violence towards a popular (often verified) account, but they turn around and go "well we can't ban the Neo-Nazis because it's a technical challenge". The closest we got was when somebody leaked (or spoke off the record, I can't remember) that they can't ban people for white supremacy on the platform because if they enforced the rule unilaterally they'd implicate a sitting US senator (which is a whole other bucket of yikes I don't want to taint this interesting discussion on AI with).
The application of AI is a very complicated mess, which doesn't help us here.

And I'm in no way in touch with what the brightest minds of our time are actually theorising about AI. I'm definitely more of a software engineer than a computer scientist, haha. But I think we phrase the subject the same way we phrase (human) intelligence: by comparing everything else to us. Yeah, there are some neat bits we can nick or otherwise appropriate from other species, but that's simply to enhance what we already possess ourselves. To bring it back to self-driving cars a bit, they're programmed by humans, and possess human bias as a consequence (never mind the detection technology that can reportedly fail to recognise darker skin tones).
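As a toy illustration of that last point (everything here is synthetic and hypothetical, not how any actual vendor builds detectors): when a detector's training data over-represents one group, the learned model leans on the cues that distinguish *that* group, and recall on the under-represented group quietly drops. Rough sketch:

```python
# Purely synthetic sketch: a "pedestrian detector" trained on data that
# over-represents group A learns group A's cues and misses group B.
# All numbers, names, and distributions are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_group(n, direction):
    """n background samples plus n 'pedestrian' samples; this group's
    pedestrians stand out along the given feature direction."""
    background = rng.normal(0.0, 1.0, (n, 2))
    pedestrians = rng.normal(0.0, 1.0, (n, 2)) + 2.5 * np.asarray(direction)
    X = np.vstack([background, pedestrians])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates the training set 20:1; its pedestrians are
# distinguishable along feature 0, group B's along feature 1.
Xa, ya = make_group(5000, [1.0, 0.0])
Xb, yb = make_group(250, [0.0, 1.0])
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Per-group recall on fresh data: the share of real pedestrians found.
for name, direction in [("group A", [1.0, 0.0]), ("group B", [0.0, 1.0])]:
    Xt, yt = make_group(2000, direction)
    print(name, "recall:", round(recall_score(yt, model.predict(Xt)), 3))
```

The point being: the model isn't malicious, it just optimises for whatever the data over-represents. Rebalancing the training data (or gathering better sensor data for the under-represented group) largely closes the gap, but somebody has to notice the gap first.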
While typing that, it made me think: does AI suffer from the same flaw a lot of technology does, in that we (generally, not us in this thread) see it as inherently better, or more pure, than us? We have a lot of science fiction about AI seeing humanity's horror and turning against us, but the moral tends to overwhelmingly be "but human love overpowers in the end" (apart from the really bleak stuff, hah). How do we deconstruct that? Can we?