I have not got round to reading it, but there is this from Science that talks about disease modelling. If there is not some machine learning involved, I would be very surprised.
Slam the brakes and hope for the best. The trolley problem is a constructed problem and you will almost never encounter clear-cut examples of it in real life. So trying to implement behavior for this would be foolish, because you have very little data and it would be extremely hard to test. It's also risky, because you might have to argue counterfactuals in court (i.e. what would have happened if the car did not run over this person on purpose). The safe option would be to reduce speed as much as possible to minimize damage if there is no clear path available. I don't think anybody would take issue with that.
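To make that concrete, here is a toy sketch (my own illustration with made-up types, not anyone's actual autonomous-driving stack) of the fallback policy argued for above: take a clear path if one exists, otherwise brake as hard as possible, with no attempt to rank who gets hit.

```python
from dataclasses import dataclass

@dataclass
class Path:
    clear: bool            # no detected obstacle on this trajectory
    max_safe_speed: float  # m/s

def plan(paths: list[Path], current_speed: float) -> tuple[str, float]:
    clear_paths = [p for p in paths if p.clear]
    if clear_paths:
        # Follow the clear path that lets us keep the most speed.
        best = max(clear_paths, key=lambda p: p.max_safe_speed)
        return "follow", min(current_speed, best.max_safe_speed)
    # No clear path: maximum braking, no weighing of targets.
    return "brake", 0.0

print(plan([Path(False, 0.0), Path(True, 8.0)], 15.0))  # ('follow', 8.0)
print(plan([Path(False, 0.0)], 15.0))                   # ('brake', 0.0)
```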
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?
Yes, I think some AI tasks require humanlike behavior (e.g. a voice assistant), and adding emotions would allow it to perform better at them.
https://en.wikipedia.org/wiki/Emotion said:Emotions are biological states associated with the nervous system brought on by neurophysiological changes variously associated with thoughts, feelings, behavioural responses, and a degree of pleasure or displeasure. There is currently no scientific consensus on a definition.
Leela Chess Zero defeated Stockfish at TCEC 17.
A hilariously stupid recent incident: a PhD student got a lot of fanfare for a Covid-19 detector that achieved 97% accuracy... using an out-of-the-box model on a dataset of a whopping 30 images. Reddit link since the guy's GitHub repo and LinkedIn post touting his major breakthrough have been deleted.
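For a sense of why 30 images cannot support a claim like that, here is a back-of-envelope check (my own, not from the post), assuming, charitably, that all 30 images were used for evaluation: a 95% Wilson score interval for an accuracy estimated on 30 samples.

```python
from math import sqrt

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(29 / 30, 30)   # 29/30 correct, i.e. "97% accuracy"
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # roughly [0.83, 0.99]
```

In other words, the measurement is consistent with anything from a mediocre detector to a near-perfect one, before even considering that the model was likely trained on the same handful of images.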
Does this mean he was lying?
Very common is to not split your data set into training and validation sets at all, and to do training and validation on the same data. Also, even when people do use the standard train-validation-test split, they very often implicitly train for the test set by using the test set for validation instead of just reporting their results after running their train-validation procedure.
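A minimal sketch of the hygiene being described, using scikit-learn (the synthetic dataset, model, and parameter grid are illustrative, not anyone's actual pipeline): tune hyperparameters only against the validation split, and touch the held-out test split exactly once at the end.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Carve off a test set first and do not look at it during model selection.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# Split the remainder into train and validation (60/20/20 overall).
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

best_C, best_val = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):  # tune only against the validation set
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)
    if val_acc > best_val:
        best_C, best_val = C, val_acc

# Report the test score once, for the chosen model, and stop there.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"C={best_C}, val={best_val:.3f}, test={final.score(X_test, y_test):.3f}")
```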
I do think satirizing academia is a bit funnier than satirizing the MIC... though the US military and intelligence communities do use YOLO and similar algorithms for many projects.
But maybe a better question is: “What are we going to do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to.... wait, you’re saying that’s exactly what it will be used for?? Oh.
Well the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait..... [1]
I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [13], or tracking their cat as it wanders around their house [19]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much. In closing, do not @ me. (Because I finally quit Twitter).
[1] The author is funded by the Office of Naval Research and Google.
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?
Like Zuckerberg?