The AI Thread

I think edge detection happens in the retina, and it is the mid- and high-level features that are modelled in the visual cortex. Much of this comes from analysis of human vision.
Yeah, I read that some initial processing can be done in retina, too.
The interesting part is that neural networks were not specifically designed to learn edge detection. They learn it only because it helps with the task they are being trained to do.
 
Well, they are built with a convolutional layer at the resolution of edge detection, and one of the features it recognises at that resolution is edges.
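For what it's worth, that point is easy to make concrete: the first-layer filters a trained CNN converges to often end up resembling classic hand-designed edge detectors such as the Sobel kernel. A toy Python sketch, with random data standing in for an image:

```python
# Toy sketch: the hand-designed Sobel kernel that first-layer CNN filters
# often end up resembling after training. Data here is random, for illustration.
import numpy as np
from scipy.signal import convolve2d

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges

image = np.random.rand(16, 16)                   # stand-in grayscale image
edges = convolve2d(image, sobel_x, mode="same")  # per-pixel edge response
print(np.abs(edges).max())
```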
 
In those machines, wouldn't the lower levels be acting as the higher levels, unlike with humans? Any reason to have (say) the edge detection be less accessible to the actual machine?

To have meaningful levels, you need something which actually moves from level to level, not something that is on all levels at once (or, alternatively, to which every level looks the same). A human is consciously present at some levels and unconsciously at more (or all), while a machine simply has no such positioning.
 
Well, data is processed level by level in a machine too.
As for consciousness, it is likely the result of the activity of more or less the same neurons as those in the "lower" levels of our brain. My understanding is that there is nothing special about it. We are trained to survive in our environment much as artificial neural networks are trained to do tasks like image recognition. Of course, millions of years of evolution is a very different process from training a neural network, but in the end both are optimization for some task. We developed consciousness because it is beneficial for our survival, somewhat as neural networks develop the ability to detect edges because it helps them recognize images. Our brains obviously have a different and much more complex architecture than current networks, and our minds "run" on wetware instead of hardware, but I'd say there are no direct and obvious clues that the brain must be fundamentally different from a machine.
 
If you put a photo of yourself on fecesbook or twitter (or just about anywhere else online without security), then Clearview AI have given it to Saudi Arabia.

Internal documents revealed by BuzzFeed News show that Clearview offered its technology to law enforcement agencies, governments, and academic institutions in 24 countries, including the UK, Brazil, and Saudi Arabia, on a try-before-you-buy basis.

The facial-recognition biz scraped billions of photos from public social media profiles, including Instagram and Facebook, and put them all into a massive database. Clearview's customers can submit pictures of people and the system will automatically try to locate those people in the database, using facial recognition, and return any details picked up from their personal pages if successful. Thus, the police can, for example, give the service a CCTV camera still of someone, and if it matches a face in the database, the system will report back their information, such as their name, social media handles, and so on.
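Functionally, that lookup amounts to nearest-neighbour search over face embeddings. A minimal sketch of the idea, with every name and number invented (Clearview's actual pipeline is not public):

```python
# Hypothetical sketch of embedding-based face lookup; not Clearview's code.
import numpy as np

rng = np.random.default_rng(0)
# Pretend database: one 128-d embedding per scraped profile photo.
db_embeddings = rng.normal(size=(1_000, 128))
db_profiles = [f"user_{i}" for i in range(1_000)]  # placeholder handles

def match(query: np.ndarray, threshold: float = 0.9):
    """Return the profile whose embedding is most similar to the query,
    or None if nothing clears the similarity threshold."""
    # Cosine similarity between the query and every stored embedding.
    db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    sims = db_norm @ q_norm
    best = int(np.argmax(sims))
    return db_profiles[best] if sims[best] >= threshold else None

query = db_embeddings[42] + rng.normal(scale=0.01, size=128)  # noisy CCTV still
print(match(query))  # -> "user_42"
```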
Illinois is fighting back

Illinois law is quite tough on collecting data for biometric applications, including facial recognition. The state's Biometric Information Privacy Act (BIPA) requires companies to obtain written consent from people to collect and store data that can be used for identification purposes.

The ACLU sued Clearview in May 2020, claiming it had violated BIPA. Clearview tried to get the case thrown out by saying its business practices were protected under the First Amendment. But an Illinois court didn't agree; Judge Pamela Meyerson dismissed [PDF] the startup's claims, and the lawsuit will go ahead.
 
TBH not surprised they're based in NYC instead of SF
 
We gonna get smushed on that one Samson.
 
Molecular-level parallel AI

The details are a bit beyond me, but this could be really big if it could be scaled up. I do not know how to compare the processing density of this to more conventional measures such as those involved in Moore's law, but "a lot" seems appropriate.

Decision trees within a molecular memristor
Multiple redox transitions in a molecular memristor can be harnessed as ‘decision trees’ to undertake complex and reconfigurable logic operations in a single time step.

To advance the performance of logic circuits, we are re-imagining fundamental electronic circuit elements by expressing complex logic in nanometre-scale material properties. Here we use voltage-driven conditional logic interconnectivity among five distinct molecular redox states of a metal–organic complex to embed a ‘thicket’ of decision trees (composed of multiple if-then-else conditional statements) having 71 nodes within a single memristor.

Using simple circuits of only these elements, we experimentally demonstrate dynamically reconfigurable, commutative and non-commutative stateful logic in multivariable decision trees that execute in a single time step and can, for example, be applied as local intelligence in edge computing.
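To make the "decision tree" framing concrete: a decision tree is just a cascade of if-then-else conditions. The toy Python tree below is invented for illustration (nowhere near the paper's 71 nodes); a conventional circuit evaluates such branches gate by gate, whereas the paper reports that the memristor's five redox states resolve an entire tree in a single voltage-driven step.

```python
# Toy tree of if-then-else 'nodes'; invented for illustration, not the
# paper's 71-node thicket.

def decision_tree(v1: float, v2: float, v3: float) -> int:
    if v1 > 0.5:                              # node 1
        if v2 > 0.5:                          # node 2
            return 1 if v3 > 0.5 else 0       # nodes 3 and 4 (leaves)
        return 1
    if v3 > 0.5:                              # node 5
        return 0
    return 1 if v2 > 0.5 else 0               # nodes 6 and 7 (leaves)

print(decision_tree(0.7, 0.2, 0.9))           # path: node 1 -> node 2 -> 1
```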
 
AI invents 4 new materials (for batteries or something)

Chemists have discovered four new materials based on ideas generated from a neural network, according to research published in Nature.

Rosseinsky and his colleagues turned to a neural network made up of nine layers and over 50,000 parameters. The team fed the software examples of known solid state materials from the Inorganic Crystal Structure Database, a dataset containing at least 200,000 inorganic compounds.

The neural network shuffles the combinations of chemicals to generate new ones for scientists to study. These outputs are ranked by the software on how likely they are to produce materials that are novel and possible to produce in a lab.

Four materials with elements highlighted by this model have already been synthesised in the laboratory.

The four materials crafted from hundreds of possible outputs by the AI model are a family of crystalline solids that conduct lithium atoms, according to the academics; the team believes they could be useful in batteries for electric cars one day.
Write-up | Paper | Code
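The generate-and-rank loop described above can be sketched roughly as follows; the element pool and scoring function are invented stand-ins, not the paper's actual nine-layer network:

```python
# Hypothetical sketch of the generate-and-rank loop; the real model,
# features, and scoring in the Nature paper are far richer.
import itertools
import random

elements = ["Li", "S", "Cl", "P", "O"]       # toy element pool

def plausibility(combo) -> float:
    """Stand-in for the trained network's score of how likely a
    combination is to yield a novel, synthesisable material."""
    random.seed("".join(combo))               # deterministic toy score
    return random.random()

# Generate candidate element combinations, then rank them by score.
candidates = list(itertools.combinations(elements, 3))
ranked = sorted(candidates, key=plausibility, reverse=True)
for combo in ranked[:4]:                      # top four for chemists to try
    print(combo, round(plausibility(combo), 3))
```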

From the comments, Skynet (military communications) and Cyberdyne Systems (cybernetics) already exist, so no need to worry there :shifty:.
 
Ask Jamie, artificial but not very intelligent:

A chatbot used by Singapore's Ministry of Health (MOH) has been switched off after providing inappropriate answers to residents' queries on COVID-related matters.
[Screenshot: the Ask Jamie chatbot's inappropriate reply]
 

(From comments section)
Taliban: Captures Afghanistan.
Everyone: Where's the UN?
UN:
 
Robot artist Ai-Da released by Egyptian border guards
A British-built robot that uses cameras and a robotic arm to create abstract art has been released after Egyptian authorities detained it at customs.

https://www.bbc.com/news/world-us-canada-58993682
 
Neural networks overtake humans in Gran Turismo racing game

Driving a racing car requires a tremendous amount of skill (even in a computer game). Now, artificial intelligence has challenged the idea that this skill is exclusive to humans — and it might even change the way automated vehicles are designed.

As complex as the handling limits of a car can be, they are well described by physics, and it therefore stands to reason that they could be calculated or learnt. Indeed, the automated Audi TTS, Shelley, was capable of generating lap times comparable to those of a champion amateur driver by using a simple model of physics. By contrast, GT Sophy (the computer game version) doesn’t make explicit calculations based on physics. Instead, it learns through a neural-network model. However, given the track and vehicle motion information available to Shelley and GT Sophy, it isn’t too surprising that GT Sophy can put in a fast lap with enough training data.
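The contrast between the two approaches can be sketched roughly like this; the friction numbers and the tiny untrained network are made up for illustration:

```python
# Contrast sketch: Shelley-style physics vs GT Sophy-style learned policy.
# Both map vehicle/track state to a control; everything here is a toy.
import numpy as np

def physics_controller(speed: float, curvature: float) -> float:
    """Physics route: brake if above the friction-limited corner speed
    v_max = sqrt(mu * g / curvature)."""
    mu, g = 1.2, 9.81
    v_max = np.sqrt(mu * g / max(curvature, 1e-6))
    return -1.0 if speed > v_max else 1.0     # crude brake/throttle

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(1, 8))  # untrained toy net

def learned_controller(speed: float, curvature: float) -> float:
    """Learned route: a network maps state to control directly; training
    (reinforcement learning, in GT Sophy's case) would tune W1 and W2."""
    h = np.tanh(W1 @ np.array([speed, curvature]))
    return float(np.tanh((W2 @ h)[0]))        # control in [-1, 1]

print(physics_controller(60.0, 0.02), learned_controller(60.0, 0.02))
```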

What really stands out is GT Sophy’s performance against human drivers in a head-to-head competition. Far from using a lap-time advantage to outlast opponents, GT Sophy simply outraces them. Through the training process, GT Sophy learnt to take different lines through the corners in response to different conditions. In one case, two human drivers attempted to block the preferred path of two GT Sophy cars, yet the AI succeeded in finding two different trajectories that overcame this block and allowed the AI’s cars to pass (Fig. 1).

GT Sophy also proved to be capable of executing a classic manoeuvre on a simulation of a famous straight of the Circuit de la Sarthe, the track of the car race 24 Hours of Le Mans. The move involves quickly driving out of the wake of the vehicle ahead to increase the drag on the lead car in a bid to overtake it. GT Sophy learnt this trick through training, on the basis of many examples of this exact scenario — although the same could be said for every human racing-car driver capable of this feat. Outracing human drivers so skilfully in a head-to-head competition represents a landmark achievement for AI.

The implications of Wurman and colleagues’ work go well beyond video-game supremacy. As companies work to perfect fully automated vehicles that can deliver goods or passengers, there is an ongoing debate as to how much of the software should use neural networks and how much should be based on physics alone. In general, the neural network is the undisputed champion when it comes to perceiving and identifying objects in the surrounding environment. However, trajectory planning has remained the province of physics and optimization. Even vehicle manufacturer Tesla, which uses neural networks as the core of autonomous driving, has revealed that its neural networks feed into an optimization-based trajectory planner (see go.nature.com/3kgkpua). But GT Sophy’s success on the track suggests that neural networks might one day have a larger role in the software of automated vehicles than they do today.
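That hybrid split, a network for perception feeding a physics/optimization stage for planning, might look roughly like this; every function and number here is a hypothetical stand-in, not any manufacturer's code:

```python
# Hypothetical sketch of the hybrid architecture described above.
import numpy as np

def perceive(camera_frame: np.ndarray) -> np.ndarray:
    """Stand-in for a perception network: returns obstacle positions."""
    return np.array([[10.0, 1.5]])            # one obstacle, made up

def plan(obstacles: np.ndarray) -> float:
    """Stand-in optimizer: pick the lateral offset minimising a cost of
    path deviation plus obstacle proximity."""
    offsets = np.linspace(-3, 3, 61)          # candidate lateral offsets (m)
    cost = offsets ** 2                       # prefer staying on the line
    for _, oy in obstacles:
        cost += 5.0 / (0.1 + (offsets - oy) ** 2)  # penalty near obstacle
    return float(offsets[np.argmin(cost)])

frame = np.zeros((4, 4))                      # dummy camera input
print(plan(perceive(frame)))                  # swerves away from y = 1.5
```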

Figure 1 | Neural-network drivers outperform human players. Wurman et al. report a neural-network algorithm — called GT Sophy — that is capable of winning against the best human players of the video game Gran Turismo. When two human drivers attempted to block the preferred path of two GT Sophy cars, the algorithm found two ways to overtake them.

Write-up | Paper
 
^The human players messed up by not occupying both lanes :/ The game would have been over.
Greed -> the computer wins.
Are you sure? I think that being side by side would compromise apex speed enough that they would have been caught on the straight.
 
AI-generated faces are indistinguishable from real ones, and are more trustworthy

Humans can no longer reliably tell the difference between a real human face and an image of a face generated by artificial intelligence, according to a pair of researchers.

Two boffins – Sophie Nightingale from the Department of Psychology at the UK's Lancaster University and Hany Farid from Berkeley's Electrical Engineering and Computer Sciences Department in California – studied human evaluations of both real photographs and AI-synthesized images, leading them to conclude nobody can reliably tell the difference anymore.

In one part of the study – published in the Proceedings of the National Academy of Sciences USA – humans identified fake images on just 48.2 per cent of occasions.

In another part of the study, participants were given some training and feedback to help them spot the fakes. That cohort classified faces correctly 59 per cent of the time, but their accuracy plateaued at that point.

The third part of the study saw participants rate the faces as "trustworthy" on a scale of one to seven. Fake faces were rated as more trustworthy than the real ones.

"A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of our real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy," wrote the researchers.​

[Paper Figure 1]

The most (Top and Upper Middle) and least (Bottom and Lower Middle) accurately classified real (R) and synthetic (S) faces. White faces were the least accurately classified, and male White faces were even less accurately classified than female White ones.
[Paper Figure 3]

The four most (Top) and four least (Bottom) trustworthy faces and their trustworthy rating on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Synthetic faces (S) are, on average, more trustworthy than real faces (R).

Write-up | Paper
 
It's the same StyleGAN2, old technology from 2020.
People can be trained to recognize fakes better - look for messy hair, different earrings, weird backgrounds, etc.
At least until StyleGAN3 is released.
 