Are there any situations where the right answer to "I want this bit of accurate information" is "Ask a generative AI large language model"?
There are none, but that does not stop people.
This is very likely an accurate statement. I'm not saying there aren't any, but voting districts generally don't line up with ZIP codes.
It could conceivably be misleading information, but to avoid that, one would have to explicitly program the chatbot to correct the human user: "You said ZIP code, but you're asking about voting; did you mean voting district? Here are the precincts within the 19121 ZIP code."
Google does that when one misspells things, so it could conceivably be done. But the programmers would have to take it upon themselves to anticipate such errors.
LLMs are not programmed in the traditional sense. The underlying software is, of course, but the output depends on what the model has learned from its training data. The model is given the start of a conversation and tries to predict how it will continue. That makes it very tricky to get it to answer questions factually. You can, of course, fine-tune it for election-related questions, but only for the ones you can think of.
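To make the "predict how the conversation continues" point concrete, here is a deliberately tiny sketch, not a real LLM: a bigram model that, given a prompt, greedily appends whatever word most often followed the previous word in its toy training data. The corpus, function names, and prompt are all invented for illustration. Note that the model outputs the most *frequent* continuation, not the most *true* one; factual accuracy is incidental to the mechanism.

```python
from collections import Counter, defaultdict

# Toy training data (invented for this example). A real LLM trains on
# vastly more text with a neural network, but the objective is analogous:
# predict the next token given what came before.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lovely . "
    "the capital of nowhere is guesswork . "
).split()

# Count, for each word, which words follow it and how often.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def continue_text(prompt, steps=3):
    """Greedily append the statistically most likely next word, step by step."""
    words = prompt.split()
    for _ in range(steps):
        candidates = nexts.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in training
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the capital"))  # continues with whatever the counts favor
```

Nothing in `continue_text` checks whether the continuation is correct; it only reflects the training counts. That is why bolting factual guarantees onto a system like this, scaled up a billionfold, is so hard.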