The problems with excessive friendliness and inappropriate DP selection arose from the way the approach and friendship/enemy willingness functions were interdependent.
Essentially, it was an apples-to-oranges comparison: the gradual curve made AI approaches change slowly, while friendship/enemy willingness could swing sharply from turn to turn.
Trying to fix these problems piecemeal only made things more difficult. However, I've come up with a new idea for interlinking the two sensibly, which should result in much saner code that is easier to adjust in response to feedback and allows the AI to adapt better to circumstances.
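To illustrate the timescale mismatch, here is a minimal sketch in Python. This is not the actual mod code; all names, values, and the smoothing technique (an exponential moving average) are invented for illustration, assuming "approach" follows a gradual per-turn curve while "willingness" is recomputed fresh each turn:

```python
# Hypothetical sketch of the timescale mismatch described above.
# "approach" follows a gradual curve (small per-turn steps), while a raw
# "willingness" score is recomputed each turn and can swing wildly.
# Comparing the two directly is the apples-to-oranges problem.

def step_approach(current: float, target: float, rate: float = 0.1) -> float:
    """Gradual curve: approach moves only a fraction toward its target each turn."""
    return current + rate * (target - current)

def smoothed(prev: float, raw: float, alpha: float = 0.2) -> float:
    """One way to interlink the two sensibly: smooth the fast-moving
    willingness signal so it varies on the same timescale as approach
    before the two values interact."""
    return (1 - alpha) * prev + alpha * raw

# Example: a raw willingness signal that jumps from turn to turn.
raw_willingness = [0.9, -0.8, 0.7, -0.6, 0.8]
approach, willingness = 0.0, 0.0
for raw in raw_willingness:
    willingness = smoothed(willingness, raw)   # now changes gradually
    approach = step_approach(approach, willingness)
print(round(approach, 3), round(willingness, 3))
```

With the smoothing in place, both values evolve at a comparable pace, so feeding one into the other no longer produces turn-to-turn whiplash.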