I meant average playtime in a particular game across the total number of owners — sorry for the imprecise wording. If we have to guess the number of owners from the number of simultaneous players, this is the metric we need.
It can be roughly calculated from GameSpy by multiplying the average hours played over the last 2 weeks by the percentage of active players and dividing by 14 (since the data covers 2 weeks). You'll see that some games clearly average more than 2 hours per day among active players, but across all owners even the most trending games sit well below 1 hour.
(Well, except for Fish Idle, but I believe that game somehow encourages people to leave it running, because its numbers are totally ridiculous.)
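The calculation above can be sketched as a couple of small functions. The first is just the formula from the post; the second is my own hypothetical way to turn that metric back into an owner estimate from concurrent players, under the (clearly wrong, but order-of-magnitude useful) assumption that play is spread evenly over the day:

```python
def avg_daily_hours_per_owner(avg_hours_last_2w, share_active):
    """Average daily playtime per owner (active and inactive combined).

    avg_hours_last_2w: average hours over the last 2 weeks, as reported
        among players who were active in that window.
    share_active: fraction of all owners who played in that window (0..1).
    """
    return avg_hours_last_2w * share_active / 14.0  # 2 weeks = 14 days


def rough_owner_estimate(avg_concurrent, daily_hours_per_owner):
    """Steady-state guess: daily playtime spread evenly over 24 hours.

    Play is of course not evenly distributed around the clock, so this
    is an order-of-magnitude figure, not a real measurement.
    """
    return avg_concurrent * 24.0 / daily_hours_per_owner
```

For example, if active players averaged 10 hours over two weeks and 20% of owners were active, that's 10 * 0.2 / 14 ≈ 0.14 hours per day per owner; an average of 5,000 concurrent players would then suggest something on the order of 5000 * 24 / 0.14 ≈ 840,000 owners.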
Once again, sorry — I sometimes wrote from my phone, so some details were missing.
Neither of these is anything new. People who run opinion polls know how to handle these differences, and so do people who analyze product reviews (and not by ignoring the fact that the sample is unrepresentative).
First of all, willingness to participate in an opinion poll doesn't by itself introduce any bias. So the fluctuation from random sampling is really small and is mostly handled by increasing the audience, as long as the questions are right (that's another big area of expertise). Willingness to leave a review, on the other hand, does carry a strong bias.
In the end it all comes down to how the metric is used. If those reviews are meant to show that Civ7's initial reception is worse than Civ6's, they provide more than enough proof, even accounting for the biased sample and the other platforms. If we try to use these metrics for something very indirect, like predicting the number of DLC buyers, it falls apart, and bias is not even the strongest reason why.