Yes, that's how I would feel after seeing
https://nethackchallenge.com/report.html#best-overall-agent. Symbolic AI may be better than neural. My measuring stick is what difficulty it can play at, and whether an AI can play against Deity.
But you can combine the two, as Wikipedia says. Hybrid AI may be the best possible approach, which is unsurprising.
With Reinforcement Learning, you can use neural networks as the function approximator; that gets you "Deep Reinforcement Learning".
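The core idea can be shown in a few lines: ordinary Q-learning stores a value per (state, action) pair, and "deep" RL just swaps the table for a trained function approximator. A minimal sketch on an invented toy problem (a 3-state chain where moving right off the end pays 1), using a linear "network" so it runs with only NumPy — all names and numbers here are made up for illustration:

```python
import numpy as np

# Toy deep-RL sketch: replace the Q-table with a function approximator
# trained by gradient descent on the TD error. The "network" is linear
# (one weight row per action) so the example needs nothing beyond NumPy.
# Environment (invented): a 3-state chain; moving right from the last
# state pays reward 1 and resets to the start.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 3, 2          # actions: 0 = left, 1 = right

def one_hot(s):
    v = np.zeros(N_STATES)
    v[s] = 1.0
    return v

W = np.zeros((N_ACTIONS, N_STATES))  # Q(s, a) = W[a] @ one_hot(s)

def step(s, a):
    if a == 1 and s == N_STATES - 1:
        return 0, 1.0                # reward, reset to start
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, 0.0

gamma, lr, eps = 0.9, 0.1, 0.5
s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps \
        else int(np.argmax(W @ one_hot(s)))
    s2, r = step(s, a)
    # gradient step on the temporal-difference error
    target = r + gamma * np.max(W @ one_hot(s2))
    td_error = target - W[a] @ one_hot(s)
    W[a] += lr * td_error * one_hot(s)
    s = s2

# The learned greedy policy should be "go right" in every state.
policy = [int(np.argmax(W @ one_hot(st))) for st in range(N_STATES)]
print(policy)
```

In a real agent the linear map would be a deep network (and the gradient step would go through an optimizer), but the TD-error training loop is the same shape.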
Or, take a rule-based AI and use machine learning to make decisions:
https://pythonprogramming.net/starcraft-ii-ai-python-sc2-tutorial/
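The hybrid pattern in that tutorial boils down to: hard rules win when they apply, and a learned model decides the rest. A sketch of the dispatch logic, with the state fields, action names, and the stand-in "model" all invented for illustration (the tutorial's real model is a neural network over game observations):

```python
# Hypothetical sketch of a rule-based AI that defers to a learned model:
# hard rules handle the clear-cut cases; a trained classifier (stubbed
# here) picks among the remaining options. All field and action names
# are made up for illustration.

def rule_based_override(state):
    """Hard rules that always win when they apply."""
    if state["supply_left"] == 0:
        return "build_supply"
    if state["under_attack"] and state["army_count"] > 0:
        return "defend"
    return None  # no rule fired; defer to the model

def model_predict(state):
    """Stand-in for a trained classifier (e.g. a CNN over the minimap).
    Here it is a trivial heuristic so the sketch runs without any ML
    library."""
    return "attack" if state["army_count"] >= 15 else "expand"

def choose_action(state):
    return rule_based_override(state) or model_predict(state)

print(choose_action({"supply_left": 0, "under_attack": False, "army_count": 3}))
print(choose_action({"supply_left": 4, "under_attack": False, "army_count": 20}))
```

The nice property is that you can start fully rule-based and gradually let the model take over more decisions as it gets better.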
With Cv4MiniEngine, there is a way to create a "learning environment" for these sorts of AIs. Or rather, it shows you how to use the DLL (except for that annoying end-turn problem...), and the DLL has performance enhancements. One would basically create a headless engine, or rather a server, that a modern Python 3 AI with all your OpenCV/neural-network magic could connect to.
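To make the headless-server idea concrete, here is a sketch of what the wire protocol could look like. The engine side would normally be the C++ DLL listening on a socket; below a toy Python "engine" stands in for it so the example is self-contained. The JSON-lines protocol and the field names (`action`, `turn`, `score`) are entirely invented:

```python
# Sketch of "headless engine as a server": the engine speaks a simple
# JSON-lines protocol over TCP; the Python agent connects and drives
# turns. A toy in-process server stands in for the real DLL here.

import json
import socket
import threading

def toy_engine(server_sock):
    """Stand-in for the engine: applies actions, returns game state."""
    conn, _ = server_sock.accept()
    with conn, conn.makefile("rw") as f:
        turn, score = 0, 0
        for line in f:
            msg = json.loads(line)
            if msg["action"] == "quit":
                break
            turn += 1
            score += 10 if msg["action"] == "grow" else 1
            f.write(json.dumps({"turn": turn, "score": score}) + "\n")
            f.flush()

server = socket.create_server(("127.0.0.1", 0))   # port 0: pick a free port
port = server.getsockname()[1]
threading.Thread(target=toy_engine, args=(server,), daemon=True).start()

# Agent side: connect and play two turns, then tell the engine to stop.
with socket.create_connection(("127.0.0.1", port)) as c, c.makefile("rw") as f:
    states = []
    for action in ["grow", "wait"]:
        f.write(json.dumps({"action": action}) + "\n")
        f.flush()
        states.append(json.loads(f.readline()))
    f.write(json.dumps({"action": "quit"}) + "\n")
    f.flush()

print(states)   # the state dicts returned by the "engine"
```

The agent side stays the same whether the other end is this toy loop or the real engine, which is the whole point of the server split.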
Or, go the extra mile and reimplement the whole DLL for maximum performance (with no Python calls at all). After all, you'd probably want lots and lots of training data.
Another kind of AI that I keep thinking about is solving for a near-optimal isolated empire. In theory, you could use a MILP or constraint solver (with a Python interface), but even optimising a single city may take forever. Another way is "Monte Carlo tree search", which Total War used, but I would want near-optimality, which would still take forever. The big problems with this kind of thing are the huge amount of symmetry (near-equivalent search trees for each choice of tiles to work) and the poor objective bounds (you can't tell which sub-tree is better). If you use an off-the-shelf solver, the things to try may be HiGHS or MiniZinc.
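For reference, the tree-search alternative fits in a page. Below is a bare-bones UCT (the standard Monte Carlo tree search variant) on a stand-in problem: pick 0 or 1 four times, payoff is the count of 1s plus noise. A real empire optimiser would replace `rollout` and the child expansion with actual game simulation; everything here is invented for illustration, and the symmetry/bounds problems mentioned above are exactly what this toy hides:

```python
import math
import random

# Bare-bones UCT (Monte Carlo tree search) on an invented toy problem:
# choose 0/1 four times; reward = number of 1s chosen, plus noise.

DEPTH = 4
random.seed(1)

class Node:
    def __init__(self, state):
        self.state = state       # tuple of choices made so far
        self.children = {}       # action -> Node
        self.visits = 0
        self.value = 0.0

def rollout(state):
    """Play out the rest of the game randomly and score it."""
    while len(state) < DEPTH:
        state = state + (random.randint(0, 1),)
    return sum(state) + random.gauss(0, 0.1)

def uct_child(node):
    """Pick the child maximising the UCB1 score."""
    return max(node.children.values(),
               key=lambda c: c.value / c.visits
               + math.sqrt(2 * math.log(node.visits) / c.visits))

def search(root, iters=2000):
    for _ in range(iters):
        node, path = root, [root]
        # selection: descend while the node is fully expanded
        while len(node.children) == 2 and len(node.state) < DEPTH:
            node = uct_child(node)
            path.append(node)
        # expansion: add one untried child, if any
        if len(node.state) < DEPTH:
            a = 0 if 0 not in node.children else 1
            node.children[a] = node = Node(node.state + (a,))
            path.append(node)
        # simulation + backpropagation
        reward = rollout(node.state)
        for n in path:
            n.visits += 1
            n.value += reward

root = Node(())
search(root)
# Report the most-visited root action; it should prefer 1.
best = max(root.children, key=lambda a: root.children[a].visits)
print(best)
```

Even on this trivial objective it takes thousands of playouts to be confident at the root, which is a small taste of why near-optimality over a whole empire would take forever.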
Another idea is that a sufficiently good AI could be used to give hints to the player, like you would have in chess programs.