I think that it only seems like the human can exploit diplomacy because there is no effective policing of trustworthiness among the AI.
As it is right now, the AI doesn't prompt the human player for an evaluation of the human player's attitude towards that AI, although it conceivably could. An AI could pop up at the beginning of the turn (maybe every 10 turns)* and ask, "How do you feel towards me?" or "How are we getting along?" and the human player could select "friendly," "pleased," etc. That information would then be circulated around to all of the AIs in contact with that AI (just as the AI-AI relation information is circulated to the human player), and the human player could be held to that pledge. So, if you want to cultivate better relations with a civ, you can profess your friendliness to them and maybe get a +1 or +2, but if you then close borders within, let's say, 10 turns, you get a -2 with that civ for turning your back on them, AS WELL AS a -1 with all of the other civs, regardless of whether they care about that AI or not, just because you proved yourself untrustworthy. And god forbid you declare war on a civ to whom you have recently professed your friendliness: then maybe you get something like a -4 for backstabbing them, IN ADDITION to the usual -2 that you'd get for declaring war on them regardless of past relations (if you had said you were furious with them, they'd be pissed that you declared war, but hardly surprised or feeling cheated), and IN ADDITION you'd get a -2 with all other civs for demonstrating untrustworthiness, even with civs who hate the AI that you declared war on, just because it shows that you can't be trusted. And all of this would also be IN ADDITION to the -1 from civs allied with that civ for "You declared war on our friend!"
*Or probably a better method would be to simply have it start out at "cautious" as a default, and then it would be up to the human's prerogative to change it from then onward. I would say that it should take something like 10 turns for the AIs to start judging you by your new rating of them, so that you couldn't just drop them from friendly to furious and declare war instantly without penalty. And if we were really smart with programming the AI, we would get the AI to react to these drops in rating by beefing up their defenses towards you.
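As a rough sketch, the penalty bookkeeping described above could look something like this. Everything here is hypothetical (this is not real Civ4 code), and the numeric penalties are just the ones floated above; the extra -1 for civs allied with the victim is left out for brevity.

```python
# Hypothetical sketch of the proposed trust-policing mechanic.
# Penalty values are the ones suggested in the text; none of this is real Civ4 code.

PLEDGE_GRACE_TURNS = 10  # turns before a newly lowered rating takes effect


def apply_transgression(civs, offender, victim, kind):
    """Apply diplo penalties when `offender` breaks a professed-friendliness pledge.

    civs: dict mapping civ name -> dict of attitude integers toward other civs
    kind: "closed_borders" or "declared_war"
    (The additional -1 from civs allied with the victim is omitted here.)
    """
    if kind == "closed_borders":
        civs[victim][offender] -= 2   # "You turned your back on us!"
        third_party_hit = -1          # untrustworthiness, on principle
    elif kind == "declared_war":
        civs[victim][offender] -= 4   # backstab at professed friendship
        civs[victim][offender] -= 2   # the usual war-declaration hit
        third_party_hit = -2          # a bigger untrustworthiness hit
    else:
        raise ValueError(kind)

    # Every other civ rates the offender down, even civs who hate the victim.
    for civ in civs:
        if civ not in (offender, victim):
            civs[civ][offender] += third_party_hit
```

For example, declaring war at friendly would leave the victim at -6 towards you and every bystander at -2, matching the totals described above.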
There's nothing inherently asymmetrical about human vs. AI diplomacy. It's just that the AI currently doesn't ask our opinion and hold us to our word.
Now, currently the AI doesn't do any moral policing within its own ranks because it is unnecessary. It is a bit like that movie, "The Invention of Lying" with Ricky Gervais. In a world where everyone tells the truth (the world of Civ4 AI-AI diplomacy), you don't have to worry about punishing liars or backstabbers. But I would, in fact, like to see the AI declare on me while at friendly OCCASIONALLY, and only when it has a great deal to gain (such as at the end of the game, or if I'm just a sitting duck with warriors in all of my cities). But NOT within the current Civ4 system, because there aren't any counterbalancing moral-policing mechanisms to keep this from being exploited. You'd need to have the other AIs take note of that AI's transgression ("-2 you are untrustworthy.") just like the human does, and the AI would have to have a way to weigh the diplo hits from the other AIs against the importance of the goal (declaring war). Not a trivial task, of course.
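In its crudest form, the cost-benefit calculation that last sentence asks for could be a single comparison. This is purely a sketch of the idea, with every name and parameter invented; the real difficulty, of course, is in estimating the inputs.

```python
# Crude sketch: should an AI backstab at "friendly"?
# It weighs the value of the war goal against the diplo damage it would
# take with every other civ for proving itself untrustworthy.

def should_backstab(war_value, other_civs, hit_per_civ=-2, weight=1.0):
    """war_value: the AI's estimate of what it gains by declaring now.

    other_civs: list of (civ_name, importance) pairs, where importance is
    how much this AI cares about its relations with that civ.
    weight: conversion factor between diplo points and war value.
    """
    diplo_cost = sum(importance * abs(hit_per_civ)
                     for _, importance in other_civs)
    return war_value > weight * diplo_cost
```

So the AI would only backstab when the prize is huge (end of the game, a defenseless neighbor) or when it cares little about anyone's opinion.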
In any case, the tendency to engage in this sort of backstab could of course depend on the leader. You could game the system by peeking in the XML files and seeing that Gandhi has a 0.1% chance of backstabbing at friendly; or you could not peek; or you could peek but randomize personalities each game, to keep yourself on your toes concerning who the untrustworthy ones in that game will be (because the AI doesn't know how untrustworthy you are). Although, conceivably, we could have the game keep a running log of the human player's transgressions across games, so that if you were a prolific backstabber in previous games, you'd start a new game with an automatic -2 with all leaders due to "You have backstabbed in previous games!" (just like we get used to Catherine backstabbing). Unless you had "randomized personalities" selected, in which case the AI would disregard your trans-game log of backstabbings, just as you are ignorant of theirs for that game.
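A trans-game log like that would just be a small persistent record consulted at game start. In this sketch the file name, format, and "prolific" threshold are all invented for illustration:

```python
# Hypothetical cross-game backstab log. File name, format, and the
# threshold for "prolific backstabber" are all made up for illustration.
import json
import os

LOG_PATH = "backstab_log.json"  # hypothetical save location
PROLIFIC_THRESHOLD = 3          # backstabs across past games before the penalty kicks in


def record_backstab(path=LOG_PATH):
    """Append one backstab to the running cross-game total."""
    count = 0
    if os.path.exists(path):
        with open(path) as f:
            count = json.load(f).get("backstabs", 0)
    with open(path, "w") as f:
        json.dump({"backstabs": count + 1}, f)


def starting_penalty(randomized_personalities, path=LOG_PATH):
    """-2 with all leaders for a known backstabber, unless personalities
    are randomized (in which case your record is ignored too)."""
    if randomized_personalities or not os.path.exists(path):
        return 0
    with open(path) as f:
        count = json.load(f).get("backstabs", 0)
    return -2 if count >= PROLIFIC_THRESHOLD else 0
```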
So that's that for human-AI diplomatic asymmetricality. It's not inherent. Theoretically it could be overcome. But it would take some complicated game-theory programming of the Civ4 AI, which, with its thick branching of possibilities, is a lot more difficult than the standard optimization routines and RNG rolls the AI currently runs.
Now, as for having diplomacy not be so finely quantized, that's another question (and let's keep these two questions separate—human-AI diplomatic asymmetricality and diplomatic quantization).
The reality is, whether or not we display the information on the screen, the AI will be dealing in integers at some level. (Quite possibly, so are humans at some fundamental level, although our routines are probably so complicated and poorly understood that they merely have the appearance of qualitative emotions, when in fact it might boil down to a certain quantity of circuits firing or a certain quantity of a certain neurotransmitter being released. And although it might seem ridiculous to reduce real-world issues like "shared religion" and "mutual war" to some common currency, such that you could say, for example, that 2 shared wars = the effect of shared religion for diplomacy, perhaps that is just because our understanding of the real world is incomplete.)
Now, the question of how humans make decisions in the context of incomplete information is fascinating. Is it deterministic at the deepest level, with incomplete judgment criteria being filled in by some quantum-scale random number generator type of stuff in the brain? Or do we really have something called "free will"? In any case, the AI needs an integer and/or an RNG in order to decide. Increasing the scope of the RNG (such as giving the RNG a small chance at getting the AI to declare at friendly) can make the AI's behavior seem less deterministic (and thus more humanlike), but also more arbitrary (and thus less humanlike) if taken to the extreme (which is perhaps evidence to support the idea that it is NOT just some quantum-RNG that fills in the gaps in the human brain).
In any case, there are going to be integers at work, whether we show them or not.
Now, if people really want to know what's going on, and you hide these integers from them, then they will surely figure out a roundabout (and more laborious and frustrating) way to procure these values. They could peek in the code and see, "Aha! Whenever you share a religion with Isabella for long enough, you get a +8!" Or they could run trial-and-error tests: "Okay, this particular combination of historical factors results in pleased when dealing with Darius, but these other combinations don't, so my best guess is that shared religion with him counts for four times as much as open borders." This is what we had to do back when I used to play Alpha Centauri. And it was not fun. It was downright maddening to not have any clue why Yang had just dropped from "obstinate" to "belligerent," and we'd spend hours poring over the clues as if we were doing some tarot-card-reading exercise. (And at least Alpha Centauri did have a system of moral policing, such that your reputation suffered if you backstabbed or committed atrocities. I always loved being at "Reputation: wicked. Might: unsurpassed"...to be hated, but feared...ahhhhhhh!)
Consider also that Civ4 values are abstractions (such as grassland mines producing "3 hammers") used to make the game manageable. If we were actually trying to rule the world on our own, it would be more than a full-time job. It would involve hours of intimate conversation with foreign leaders so that we could gauge their feelings towards us. It would involve reading numerous assessment reports, and numerous news stories. I do think that, through the course of these activities, one could begin to make fine distinctions between a "friendly" ally who is just barely friendly, and who, if you refuse this one demand, will be knocked down to "pleased" and may not allow you to use its military bases for your upcoming invasion (such as Turkey vis-a-vis the U.S. before the Iraq War), and the "friendly" ally who is a few points higher, and whom you can occasionally slap without worrying about jeopardizing the relationship (Britain vis-a-vis the U.S.). I don't think it would be very difficult to assign "civ diplo values" to these civs. I'd put Turkey at +7 and Britain at +13. There, easy. Done.
Now, am I saying that I know their relationship with the U.S. well enough to gauge whether they would ever go to war with the U.S. in the near future? Well, in the case of Britain, I'd say, yeah, I really don't think there's any uncertainty at all. It is inconceivable that Britain would declare on the U.S. at this point. But Turkey? Well, I think Turkey might declare at pleased...in any case, I see no incongruence between the uncertainty that is still inherent in the civ integer abstractions of diplomacy (which includes the uncertainty, most often, of whether the civ will declare on you) and the uncertainty that is inherent in real world diplomacy.
Now, if we could just fix that human-AI asymmetry...
Edit:
Actually, thinking back on Alpha Centauri brings to mind another problem with Civ4 diplomacy: it is not reciprocal in the way that Alpha Centauri diplomacy was. In SMAC, you shared whatever diplo status you had with the AI (vendetta, truce, treaty, pact). It was an agreement that the other AI could hold you to. In Civ4, the diplo statuses aren't reciprocal, even among AIs. One AI can be annoyed with another, while that other AI is only at cautious with the first, and so on. The only continuous, reciprocal agreements that you make with the AI are open borders, defensive pacts, and permanent alliances (and let's disregard PAs, because those effectively end diplomatic considerations between the civs involved). And the Civ4 AIs currently don't hold you to those agreements and rate down your reputation IN GENERAL when you violate them in a spectacular way. They rate you down if they or their friends are involved, but not out of principle, whereas in SMAC that's exactly how the reputation component worked. In SMAC, in order to avoid a hit to your reputation (and thus more difficult future relations with all of the AIs), you were SUPPOSED to let the truce expire (and never have signed a treaty or pact in the first place) and then attack. But nobody ever did that, because the penalties for having a bad reputation still weren't harsh enough to make up for the fact that you had just doubled the size of your faction.
Anyways, we don't even need any new human rating system of the AI in order to make it reciprocal. We just need for the existing continuous agreements to count for something. In other words, let's say that for 10 turns after having cancelled open borders, you still have a "truce" with an AI that allows you to declare on them, but at the cost of appearing untrustworthy to the rest of the AIs. And let's say that for 10 turns after having cancelled a defensive pact (whether of your own accord or not), there also exists a grace period where attacks are allowed, but even more harshly judged. Now we just need to make open borders and defensive pacts count for more integer points, and other things for fewer, in order to rule out situations where a civ is friendly with you even though you've never opened borders with them (or maybe just have a stipulation that open borders is a necessary, but not sufficient, condition for being at friendly with a civ), and voila! The asymmetry problem is more or less solved.
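The grace-period rule sketched above can be expressed in a few lines. The 10-turn window and the -2 hit are the numbers floated earlier in this post; the -4 for a broken defensive pact is my own invented stand-in for "even more harshly judged":

```python
# Hypothetical grace-period rule for cancelled agreements. The 10-turn
# window and -2 hit come from the proposal above; the -4 for a broken
# defensive pact is an invented stand-in for "even more harshly judged."

GRACE_TURNS = 10


def war_trust_penalty(turns_since_cancel, agreement):
    """Return the reputation hit (with ALL civs) for declaring war
    `turns_since_cancel` turns after cancelling `agreement`.

    agreement: "open_borders" or "defensive_pact"
    """
    if turns_since_cancel >= GRACE_TURNS:
        return 0  # the residual "truce" has expired; no untrustworthiness hit
    if agreement == "open_borders":
        return -2
    if agreement == "defensive_pact":
        return -4  # judged even more harshly than broken open borders
    raise ValueError(agreement)
```

This mirrors the SMAC convention: wait out the window and you attack with a clean reputation; jump the gun and everyone marks you down.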