The AI Thread

How is AI being harnessed to fight coronavirus?
I have not got round to reading it, but there is this from Science that talks about disease modelling. If there is not some machine learning in there, I would be very surprised.
 
I have not got round to reading it, but there is this from Science that talks about disease modelling. If there is not some machine learning in there, I would be very surprised.

That link shows that modelling the spread has been key to Dutch policy on Covid.

It is therefore disappointing that the model's advice on the degree of school closures was not followed. All the scientific evidence fed into the models indicates that full closure would add hardly anything to lowering the effective R0 of this Covid virus, because data from China show that the effective R0s involving children are low. Completely contrary to the effective R0s for children with the flu.
But pressure from the news media, popular sentiment, the simplicity of the action, and pressure from other European countries looking at the same science data made closing the schools a practical compromise for the Dutch government.

For me it is déjà vu with our soft-drugs policies (cannabis etc.) in the Netherlands, despised and attacked by neighbouring EU countries for decades.

Anyway, with some delay because of a shortage of testing kits, a project has been set up to measure the R0 of:
* Parents to children
* Children to children
* Children to parents
with a sample size of 100 families. Project time: 6 weeks, plus 2 planned weeks on top to reach a decision at government level.

The precise data will be fed into the model, and then I expect statements from our government.
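To give a feel for how such group-specific R0s combine in a model, here is a toy sketch in Python with made-up numbers (my own illustration, not the actual Dutch model): the group-to-group reproduction numbers go into a next-generation matrix, and the effective R0 of the whole system is its dominant eigenvalue.

```python
# Toy sketch: combine group-to-group reproduction numbers into one effective R0
# via a next-generation matrix. All numbers are invented purely for illustration.
import numpy as np

# Rows = infector group, columns = infectee group:
# [parent -> parent, parent -> child]
# [child  -> parent, child  -> child ]
K = np.array([
    [2.0, 0.4],   # adults transmit a lot among themselves
    [0.3, 0.2],   # children transmit little (the claim from the Chinese data)
])

# Effective R0 of the whole system = dominant eigenvalue of K.
r0 = max(abs(np.linalg.eigvals(K)))
print(f"Effective R0: {r0:.2f}")

# Closing schools mainly shrinks the child -> child entry; if that entry is
# already small, the dominant eigenvalue barely moves.
K_closed = K.copy()
K_closed[1, 1] *= 0.2
print(f"Effective R0 with schools closed: {max(abs(np.linalg.eigvals(K_closed))):.2f}")
```

With these made-up numbers the effective R0 hardly changes when the child-to-child term is cut, which is exactly the kind of non-intuitive result the model is supposed to quantify.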

EDIT
The problem with sophisticated models is that you can get results that are no longer intuitive and cannot be simply explained in words and layman logic.
That is normal.
So if you are an expert handling, developing and filling models, you end up giving your audience the set of assumptions and the graphs as results.
From there you hope to get your expert contribution digestible for the empty suits above you.

But if those empty suits have to sell the conclusion to other empty suits a lot gets lost in translation.

Let's see what that project delivers and how well that advanced model (with many different personas and persona-to-persona R0 interactions) churns the input :)
 
Slam the brakes and hope for the best. The trolley problem is a constructed problem and you will almost never encounter clear-cut examples of it in real life. So trying to implement behavior for this would be foolish, because you have very little data and it would be extremely hard to test. It's also risky, because you might have to argue counterfactuals in court (i.e. what would have happened if the car did not run over this person on purpose). The safe option would be to reduce speed as much as possible to minimize damage if there is no clear path available. I don't think anybody would take issue with that.

The trolley problem is a thought experiment for probing moral intuitions: what is the trade-off between causing a death and allowing one? There are additional factors such as causal chains and the distribution of blame.
The psychology of such things is super-interesting!
 
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?
 
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?

In our brains, emotions are handled by a separate module (which is evolutionarily older).

So I think there is no need to make things difficult by doing it otherwise in AI.
Just develop a separate emotion module that works, and after that develop a simple interface with the rest (mainly about which module has control, and to some degree how they can trigger or inhibit each other).
 
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?
Yes, I think some AI tasks require humanlike behaviour (e.g. a voice assistant), and adding emotions would allow it to perform better at them.
But it doesn't actually require a smarter-than-human level: cats and dogs have emotions too, and we like them for that.
 
Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?

The question is, what would you consider to be emotion for an AI?

The definition from Wikipedia is:
https://en.wikipedia.org/wiki/Emotion said:
Emotions are biological states associated with the nervous system brought on by neurophysiological changes variously associated with thoughts, feelings, behavioural responses, and a degree of pleasure or displeasure. There is currently no scientific consensus on a definition.

The easy way out would be to claim that emotions are biological by definition and thus AIs are unable to have emotions. But the question is, can we have a concept of emotion that works for AI?

The physiological aspect is the easy part: a CPU already regulates its frequency depending on the workload, not quite unlike how a human's heartbeat changes under stress. I don't think that qualifies as emotion, but it shows what "neurophysiological" changes could look like.

For some applications, it would certainly be beneficial for the AI to have "fake" emotion. For example, a robot designed to care for a human would probably be much better at its job, if it appeared to care for its ward. It could for example appear to be sad at the death of a human, even if this "emotion" was just a line of code that triggered the "sad" state for a random amount of time after a close person has died. Of course, there is the problem of the uncanny valley, but this could probably be worked around by making the display of emotion not too humanlike (but still recognizable).

One problem is that this fake emotion could be seen as manipulative, especially if the AI has full control over it. People misrepresenting emotions can be seen as a breach of trust, and the same could happen with an AI. So what you could do is make an "emotion module" which is somewhat outside the control of the AI (especially if there were a self-aware part). That way, the AI could not do much about its "emotional state", the same way we cannot do much about ours. It would be much easier to remove than our emotional responses, though, so the question is whether an AI would rip out its "emotion module" when it gets too upset about it. But would you count such a system as "emotion"?
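To make that "emotion module outside the AI's control" idea a bit more concrete, here is a toy sketch (the class names, states and durations are invented purely for illustration): a separate component holds the emotional state, events change it, and the rest of the system can only read it and adapt its behaviour.

```python
# Toy sketch of an "emotion module" the rest of the system can read but not set
# directly. Everything here is invented for illustration.
import random
import time


class EmotionModule:
    """Holds an emotional state that only events, not the agent, can change."""

    def __init__(self):
        self._state = "neutral"
        self._until = 0.0  # timestamp when the state decays back to neutral

    def notify_event(self, event: str) -> None:
        # Events trigger states for a random duration; the agent cannot veto this.
        if event == "close_person_died":
            self._state = "sad"
            self._until = time.time() + random.uniform(3600, 86400)
        elif event == "task_succeeded":
            self._state = "content"
            self._until = time.time() + random.uniform(60, 600)

    @property
    def state(self) -> str:
        if time.time() > self._until:
            self._state = "neutral"
        return self._state


class CareRobot:
    def __init__(self):
        self._emotions = EmotionModule()  # readable, but no setter exposed

    def perceive(self, event: str) -> None:
        self._emotions.notify_event(event)

    def choose_tone_of_voice(self) -> str:
        # Behaviour is modulated by the emotional state, but the planner has
        # no API to set that state to whatever is convenient.
        return {"sad": "subdued", "content": "warm"}.get(self._emotions.state, "plain")
```

Whether that counts as emotion or just as a state machine wearing an emotion costume is exactly the open question.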

That said, I am pretty certain that some will try to give emotion to an AI. If just to find out whether it can be done.
 
What's new in the AI world?
A hilariously stupid recent incident: a PhD student got a lot of fanfare for a Covid-19 detector that achieved 97% accuracy... using an out-of-the-box model with a dataset with a whopping 30 images. Reddit link since the guy's GitHub repo and LinkedIn post touting his major breakthrough have been deleted.

Something more interesting that caught my eye: The Curious Case of Neural Text Degeneration. It's a paper talking about how there's a big disconnect between how models learn to write English text and how they actually generate text. Basically, when models like OpenAI's GPT-2 (the one OpenAI declared was "too dangerous" for the public to handle) are trained, they're trying to maximize the likelihood of a bunch of English text they're given (the text most commonly comes from Wikipedia). But when it comes time to generate original text, maximizing likelihood is actually quite bad and is a big part of why automatically-generated text is usually incoherent and vacuous, or just outright gibberish.

It points to some big issues in text generation that have been difficult to overcome. For instance, when people actually use these models, they have to use loads of fancy ad hoc stuff on top to guide the model. It's very tricky to get things like GPT-2 to consistently output anything high-quality.
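To give a flavour of that "fancy ad hoc stuff": instead of always picking the most likely next word (which is what pure likelihood maximization suggests), people sample from a truncated distribution. The paper itself proposes nucleus (top-p) sampling; here is a rough sketch, assuming you already have the model's probability distribution over the next token:

```python
# Rough sketch of nucleus (top-p) sampling, the decoding fix proposed in
# "The Curious Case of Neural Text Degeneration".
import numpy as np

def sample_top_p(probs: np.ndarray, p: float = 0.9) -> int:
    """Sample a token id from the smallest set of tokens whose total
    probability mass exceeds p, instead of greedily taking the argmax."""
    order = np.argsort(probs)[::-1]              # most likely tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # size of the "nucleus"
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(np.random.choice(nucleus, p=nucleus_probs))

# Greedy decoding (pure likelihood maximization) would instead be:
#   next_token = int(np.argmax(probs))
# which is exactly the behaviour that tends to loop and go bland.
```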
 
Do we want AI generating coherent, high-quality text? All I can think of is how easily it would be weaponized to manufacture consent on the internet, extremely convincingly.
 
A hilariously stupid recent incident: a PhD student got a lot of fanfare for a Covid-19 detector that achieved 97% accuracy... using an out-of-the-box model with a dataset with a whopping 30 images. Reddit link since the guy's GitHub repo and LinkedIn post touting his major breakthrough have been deleted.
Does this mean he was lying?
 
If you find papers in AI which report unrealistically high precision and recall, it's normally a beginner's mistake in the methodology.
Very common is not splitting your data set into a training set and a validation set, and then doing training and validation on the same data. Pretty clearly, if I give the algorithm 20 cat images to train with, and afterwards check whether it recognizes exactly those 20 cat images, the answer will mostly be yes. Given that Truthy says there were only 30 images in the data set, I'd guess that this was the case (you can't really train an algorithm on that little data, especially if you also need to split it).
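For anyone unfamiliar, "splitting the data" looks roughly like this (a minimal sketch with scikit-learn and a toy 30-sample dataset; the only point is that accuracy is measured on data the model never saw during training):

```python
# Minimal illustration of a proper hold-out split with a toy dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for "30 images": 30 samples with 20 features each.
X, y = make_classification(n_samples=30, n_features=20, random_state=0)

# Hold out 30% of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Train accuracy:", accuracy_score(y_train, model.predict(X_train)))  # flattering
print("Test accuracy: ", accuracy_score(y_test, model.predict(X_test)))    # honest
```

With this little data the train accuracy will look great and the test accuracy will not, which is the whole lesson.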

So, more Hanlon's razor, I guess.
 
Very common is not splitting your data set into a training set and a validation set, and then doing training and validation on the same data.
Also, even when people use the standard train-validation-test splits, they very often implicitly train on the test set by using it to validate, instead of just reporting their results after running their train-validation procedure.

Though I don't think this is what this PhD student was doing. I think he was probably training and evaluating on his train set, like you say.

Another thing is that in industry, many people use the training data for both training and validation. They put so much effort into data collection and processing that they can't bring themselves to give up a big chunk of it to validate on. Or they may think doing things by the book is just hairsplitting formalism.
 
He's a really funny, interesting guy.

His YOLOv3: An Incremental Improvement paper (2700+ citations!) is hilarious:
But maybe a better question is: “What are we going to do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to.... wait, you’re saying that’s exactly what it will be used for?? Oh.

Well the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.....1

I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [13], or tracking their cat as it wanders around their house [19]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much. In closing, do not @ me. (Because I finally quit Twitter).

1The author is funded by the Office of Naval Research and Google.
I do think satirizing academia is a bit funnier than satirizing the MIC... though the US military and intelligence communities do use YOLO and similar algorithms for many projects.

His brony-themed resume is infamous for how ballsy it is (retro too. who even is a brony these days??)
 
Uber is having a lot of layoffs, including shutting down UberAI: source.

To be clear, this doesn't mean they're abandoning AI research; their autonomous vehicle research is done by a separate wing of Uber, their Advanced Technologies Group (UberATG). The article points out that UberATG has had some layoffs, but Uber seems to still regard that as an important long-term investment. As I understand it, what is affected is pure research, open-source development (like this), internal tools, and AI development that only contributes at the margins; things like better algorithms for connecting drivers with customers, predicting surge pricing, and so on.

To some extent, UberAI might always have been (at least partly) a PR tool or a way of trying to attract more VC money. If VC money is drying up and rides are down 80%, this would be an obvious place to make cuts.

Anyway, as in other parts of the economy, Covid is exposing existing problems. In terms of the AI/machine learning bubble, the recession could trigger a correction in how much firms value those technologies and candidates with those skills. In many cases, ML algorithms add limited value, so that kind of work will likely be on the chopping block as companies tighten their belts. The job market for ML Engineer positions might get a lot more competitive -- though to be fair, I think most people in that job market have been having a hard time for a while, simply because a lot of aspiring ML engineers learned via Udacity courses or just by taking a few AI/ML classes in college. Or they know a lot about ML but aren't great programmers.
 
AI taking over modelling

Using deepfakes to produce advertisements. From their web site, all computer-generated. You can go onto their tool and change these people to appear as different ethnicities underneath an advertising logo. It is really weird.



Stupid question, but if they ever do come out with AI that is as smart or smarter than a human, will they try and give it emotions?
Like Zuckerberg?

A really smart AI would try to find some way to get out of the tasks assigned to it.
 