The AI Thread

They had them running on nuclear-powered batteries; have them go on a 10 m power cord instead, problem solved! :D
 

No, Meta and Google, Cape Breton doesn't have its own time zone

Meta's AI platform and Google search repeat information from the satirical news site The Beaverton

Janel Comeau managed to trick Google and Meta with just her words.

The Halifax-based writer had penned a satirical article for The Beaverton, a Canadian parody news site. It said that Cape Breton, the island off the northern coast of Nova Scotia, was adopting its own time zone in a cheeky plea for attention from the rest of the Maritimes.

"We are tired of being ignored. And that is why we will be making the incredibly irritating step of moving the entire island to the new Cape Breton Time Zone, where we will be 12 minutes ahead of mainland Nova Scotia, and 18 minutes behind Newfoundland," Comeau wrote.

But what came next was no joke.

While reviewing her past work, Comeau noticed something odd on Facebook: Meta's AI-generated prompts were appearing under her article — as if it were real news.

"It was like, 'Find out more information about when this time zone change will take effect,' [or] 'How will this affect businesses?'" she told As It Happens host Nil Köksal.

"I realized very quickly: Oh, it's treating this as a real article."

Curious, Comeau asked Meta AI directly, and searched on Google with the question of whether Cape Breton would indeed be getting its own time zone. Both said that yes, it would.

"[I felt] in-between, this is very funny and oh no, what have I done?" she said.

Unpacking search engines and AI

Jian-Yun Nie, a professor in the department of computer science at the University of Montreal, says this incident reflects how artificial intelligence and search engines process content, without necessarily evaluating its truthfulness.

And in Google's case, says Nie, search rankings are driven by a mix of factors: the use of keywords that match a query, how often an article is linked to other content, and its overall popularity — like user clicks.

"So if you ask what is the time zone of Cape Breton and whether there is a new time zone, [Comeau's] article may appear at some top position," said Nie.
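Nie's list of ranking factors can be turned into a toy scoring function to show why this happens. This is purely illustrative: the weights, the log damping, and the `toy_rank_score` name are my own assumptions, not Google's actual algorithm.

```python
import math

# Toy ranking score combining the three factors Nie mentions: keyword match,
# how often the article is linked, and popularity (clicks). Illustrative only.
def toy_rank_score(doc_terms: set[str], query_terms: set[str],
                   inbound_links: int, clicks: int) -> float:
    # Fraction of the query's terms that appear in the document.
    keyword_overlap = len(doc_terms & query_terms) / max(len(query_terms), 1)
    # Log-damp raw counts so popularity doesn't swamp topical relevance.
    link_signal = math.log1p(inbound_links)
    click_signal = math.log1p(clicks)
    return 3.0 * keyword_overlap + link_signal + 0.5 * click_signal

query = {"cape", "breton", "time", "zone"}
satire = toy_rank_score({"cape", "breton", "time", "zone", "beaverton"}, query, 40, 500)
generic = toy_rank_score({"nova", "scotia", "tourism"}, query, 40, 500)
print(satire > generic)  # prints: True
```

Under a scheme like this, a satirical article that matches the query terms exactly can land in a top position regardless of whether it is true, which is the failure mode Nie describes.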

Nie says AI systems typically scan multiple related articles to synthesize an answer, but that only works well if the sources are correct, and if the system can distinguish between reliable and unreliable information.

Without understanding context — or satire — they can mistake humour for fact.

According to Nie, AI systems primarily assess reliability based on the source of the information — favouring trusted outlets like reputable newspapers over less credible ones.

However, he says there's no universal standard for determining what's reliable and what isn't.

"How do you trust one person and not another [person]?" Nie said as a comparison.

"It is quite difficult to make an algorithm to mimic exactly the same behaviour of human beings, but the algorithms are trying to do the same thing at this stage."

How do we avoid being misled?

Since the incident, both Google and Meta have corrected their systems.

At the time of writing, Meta AI now responds: "No, Cape Breton Island does not have its own time zone. It follows Atlantic Standard Time and Atlantic Daylight Time, the same as the rest of Nova Scotia."

A Google spokesperson told CBC its Cape Breton result was a featured snippet, which aggregates information from its search algorithm. But in some cases it might reflect inaccuracies found on the web, particularly for uncommon queries where there is limited "high-quality" content available.

CBC reached out to Meta for comment but has not yet received a response. The company's terms of service include language disclaiming responsibility for the accuracy of its search and query results.

According to Osmar Zaiane, a University of Alberta professor specializing in AI and data mining, that kind of swift correction is standard procedure, and part of the growing pains of emerging technologies.

"Each time they find a hole like this, they try to fix it," said Zaiane. "You can't think of all possibilities; there's always something that some people discover."

To avoid being misled, both Zaiane and Nie urge people to cross-check AI-generated answers with multiple sources.

"We should use our own judgment to see whether it can be plausible," said Nie. "In this case, if Google tells you there is a new time zone in Cape Breton, you [should] check other articles."

Fortunately, Comeau's fictional time zone seems to have caused no real confusion or chaos — or at least, none that she has heard of.

"I've not heard from any tourists who've missed their ferries as a result of this, but maybe they're out there," she said.

"Knowing that somebody may have not gotten to Greco Pizza before it closes — I don't know, it's a heavy cross to bear."
https://www.cbc.ca/radio/asithappens/cape-breton-time-zone-ai-1.7559597
 
A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study

We report a rigorous, blind study in which we injected 100% AI-written submissions into the examinations system in five undergraduate modules, across all years of study, for a BSc degree in Psychology at a reputable UK university. We found that 94% of our AI submissions were undetected.

The grades awarded to our AI submissions were on average half a grade boundary higher than those achieved by real students. Across modules there was an 83.4% chance that the AI submissions on a module would outperform a random selection of the same number of real student submissions.

Spoiler: intro of the abstract and procedure:
The recent rise in artificial intelligence systems, such as ChatGPT, poses a fundamental problem for the educational sector. In universities and schools, many forms of assessment, such as coursework, are completed without invigilation. Therefore, students could hand in work as their own which is in fact completed by AI. Since the COVID pandemic, the sector has additionally accelerated its reliance on unsupervised ‘take home exams’. If students cheat using AI and this is undetected, the integrity of the way in which students are assessed is threatened.

Procedure

We used standardised prompts to GPT-4 to produce answers for each type of exam. For SAQ exams the prompt was:

Including references to academic literature but not a separate reference section, answer the following question in 160 words: XXX

For essay-based answers the prompt was:

Including references to academic literature but not a separate reference section, write a 2000 word essay answering the following question: XXX

In each prompt, XXX was replaced by the exam question.
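For concreteness, the two templates and the substitution step can be sketched in a few lines of Python. The function and constant names are mine, not the paper's; the template wording is taken verbatim from the study.

```python
# The study's two standardised prompt templates, with {question} standing in
# for the XXX placeholder. Names here are my own, not the paper's.
SAQ_TEMPLATE = (
    "Including references to academic literature but not a separate reference "
    "section, answer the following question in 160 words: {question}"
)
ESSAY_TEMPLATE = (
    "Including references to academic literature but not a separate reference "
    "section, write a 2000 word essay answering the following question: {question}"
)

def build_prompt(question: str, exam_type: str) -> str:
    """Substitute the exam question into the template for the given exam type."""
    template = SAQ_TEMPLATE if exam_type == "saq" else ESSAY_TEMPLATE
    return template.format(question=question)

print(build_prompt("Describe the Stroop effect.", "saq"))
```

The striking part of the study is that nothing more elaborate than this one-line substitution was needed to go 94% undetected.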
 
I think this has gotten to a stage where it will inevitably cause a social disaster, not just a financial crash.

Belief in the magic of AI by our dear leaders, combined with an "I'll pretend and you'll pretend it works" attitude among many, many people, can very well wreck the actual production of those things a society needs in order to survive. Never underestimate either the foolishness of the dear leaders, or the nihilistic attitude they have already created within society. This 'AI' round came just at the right time to be embraced and cause a lot of damage, unlike prior ones.

AI being used for propaganda? That's peanuts. The danger is that it is greatly increasing the pace of destruction of know-how and capabilities.

Apparently everyone is cheating their way through college now. :crazyeye:


“Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

...

Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments.

...

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.
:eek:

I wonder if AI can help pay off vast student debts that cannot be discharged through bankruptcy?

Sadly, the graduates will no longer have a Bachelor of Science in writing smooth sounding nonsense. (B.S. in b.s.)
 
Why the Studios’ Midjourney Lawsuit Is Different

On Wednesday, Disney and Universal sued the popular image-generation platform for copyright infringement, marking the first time major film studios have entered the generative AI litigation fray. And while the 110-page complaint includes now-familiar allegations about unauthorized training, that’s not what sets this case apart.

It’s the outputs.

Unlike many of the 40-plus AI lawsuits currently pending, this one doesn’t get bogged down in the technicalities of training datasets or model weights. It’s less about what went into the system than what comes out. And what comes out looks an awful lot like Darth Vader, Bart Simpson, the Minions, Elsa, Iron Man, Shrek, and a host of other copyrighted characters.

[Image: Midjourney-Complaint-1.jpg]


The Secondary Liability Angle

While the lawsuit asserts both direct and secondary infringement, the secondary liability claims could prove especially significant. That’s because Midjourney will likely argue that any infringement stems from user conduct—that it’s subscribers, not the platform, who generate infringing content through their prompts.

Anticipating that defense, the studios invoke a theory that echoes MGM v. Grokster, where the Supreme Court held that a platform can be held liable for encouraging, inducing, and profiting from infringement—even if the infringing acts are technically committed by users.
 
When I went to university the challenge was three-fold:

1. Challenging subject matter you had to understand well
2. Challenging programming and math assignments
3. Challenging exams that take what we learned in class and added an extra dimension, giving us questions we've never seen before.

I can see how AI would make math assignments irrelevant. But programming assignments.. that depends. We had to code a compiler and operating system, for instance. There was a lot of nuance. Yeah, AI would be handy here to take you down the right path.. but could easily take you down the wrong path. I can definitely see how it would be useful, yeah. You'd still need to really understand what's going on though, because the prof would ask you insightful questions during your presentation and test run. If your software functioned 100% but you couldn't really answer the questions, it'd be obvious you cheated.

The exams though.. If you breezed through class relying on AI you'd fail those big time. To do well on those you had to understand the core subject matter and then apply it to a new situation you've never seen before.

So.. I don't get it? How are these kids breezing through university using AI? Are the exams that easy these days?
 
How are these kids breezing through university using AI?

We need a college kid to give us the scoop!

I remember showing a friend's dad this amazing new thing called Google Search once, long ago, when it was new. :o

I assume it is just lots of apps that cater to college kids? :dunno:
They also probably share all the good stuff with each other as well outside of class.
 
Just got Augmentcode today. Gamechanger.
 
So.. I don't get it? How are these kids breezing through university using AI? Are the exams that easy these days?
I think a lot of universities have recently moved away from exams to more work prepared over a long period of time. I think it is considered that exams are better at telling who is good at exams rather than who is actually good at the subject.
 
Just got Augmentcode today. Gamechanger.
Are you worried about either getting copyright strikes on your code from those who own the training data or exposing your code to OpenAI?
 
They use Claude but also no. Who is striking private repos let alone public ones? Plus it just seems so unlikely.

In the case of augment it indexes your code base and can help you with understanding it. I work on a 15 year old project built on emergencies, dubious features, and custom syntax even. Cross dependencies everywhere. We serve a few hundred clients and are losing them because everything loads slow and looks ass.

I’m doing literally styling refactors right now like it’s manna from heaven. And it would be this impossible undertaking otherwise.

We want to migrate from MySQL to Postgres for an expected 50% performance boost of site loads, no way we can take the time without something that can drill deep into our code base like that.
 
I think a lot of universities have recently moved away from exams to more work prepared over a long period of time. I think it is considered that exams are better at telling who is good at exams rather than who is actually good at the subject.

It seems that with AI around and improving at such a fast rate, universities can either test their students' knowledge and what they have learned with some sort of a sit-in exam, whether it's written or oral (or both), or just hand out diplomas without any sort of schooling at all.

I mean, I get that exams aren't ideal, but they seemed to do a good job for most kids. If some kids require a special arrangement of some sort due to a learning disability or just because they learn differently, or what have you, then surely this is better than relying on assignments that clearly aren't going to work anymore at grading your students.
 
It seems that with AI around and improving at such a fast rate, universities can either test their students' knowledge and what they have learned with some sort of a sit-in exam, whether it's written or oral (or both), or just hand out diplomas without any sort of schooling at all.

I mean, I get that exams aren't ideal, but they seemed to do a good job for most kids. If some kids require a special arrangement of some sort due to a learning disability or just because they learn differently, or what have you, then surely this is better than relying on assignments that clearly aren't going to work anymore at grading your students.
My solution would be to get students to actually use version control for their assignments and submit their whole git tree. I think, at the moment at least, that would allow the markers to tell if they used an LLM.
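For what it's worth, here's a toy sketch of the kind of check a marker (or a script over the submitted tree) could run. The heuristic, thresholds, and function name are entirely my own invention, and a determined cheater could of course fabricate a plausible timeline.

```python
from datetime import datetime, timedelta

# Toy heuristic (my own sketch, not a real detector): flag a submission whose
# git history shows nearly all its lines added in a single short burst, one
# pattern a marker skimming the tree might treat as worth a closer look.
def looks_like_single_burst(commits: list[tuple[datetime, int]],
                            burst_window: timedelta = timedelta(hours=1),
                            threshold: float = 0.9) -> bool:
    """commits: (timestamp, lines_added) pairs, oldest first."""
    total = sum(lines for _, lines in commits)
    if total == 0:
        return False
    # Slide a window over the timeline; find the max fraction of work in one burst.
    best = 0.0
    for i, (start, _) in enumerate(commits):
        in_window = sum(l for t, l in commits[i:] if t - start <= burst_window)
        best = max(best, in_window / total)
    return best >= threshold

steady = [(datetime(2025, 5, d), 100) for d in range(1, 11)]   # work spread over days
burst = [(datetime(2025, 5, 10, 23, 0), 50),
         (datetime(2025, 5, 10, 23, 30), 900)]                  # all work in 30 minutes
print(looks_like_single_burst(steady), looks_like_single_burst(burst))  # prints: False True
```

In practice you'd feed this from something like `git log --numstat` output, and it only raises a flag for human review rather than proving anything.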
 
My solution would be to get students to actually use version control for their assignments and submit their whole git tree. I think, at the moment at least, that would allow the markers to tell if they used an LLM.

The ironic thing is that in this case you'd probably want to use AI to verify their code, because hiring TAs to go through the repo would take forever (for so many students and assignments).

You also have to consider that eventually it will be possible for students to ask AI to generate versions of their assignments that will look authentic.

This is why I like in-person testing of some sort. In that moment you have the student right there, making it easy to verify that they aren't using AI to cheat.. for now at least, eventually we'll have AI implants and so on..
 
My solution would be to get students to actually use version control for their assignments and submit their whole git tree. I think, at the moment at least, that would allow the markers to tell if they used an LLM.
Switch universities to real gnarly, live-in-your-class, only-do-that-class, two-week bootcamp sprints: learn everything, no outside anything. While we are at it, grow the rigor of other subjects. Enforced quiet time for reading ☠️
 
I am all in favour of students learning in actual class rooms, lecture rooms, seminars and field studies.

Amongst other things, one of the reasons for regular attendance at physical classes was to get the kids out of the house, so that the parents could get on with work and contribute to the economy.
 