The AI Thread

Hmm...
 
I installed an open-source text-to-video AI on my computer just yesterday.


Certainly not as powerful as the online tools, but still impressive for a first iteration. And for text-to-image, after a short lapse of time (~1 year), a good PC can now render pictures as well as the online services do. I wonder how fast it will happen for video...

"a white shark in a tank glass aquarium"

vidshark.gif


(you can easily guess which dataset they trained their model on :D )
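
For anyone curious how this kind of local setup typically runs: a minimal sketch, assuming the ModelScope 1.7B text-to-video weights loaded through Hugging Face's diffusers library. That's my guess at a representative open-source tool, not necessarily the exact one used above, and the frame count, step count, and output path are illustrative:

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# Load the ~1.7B-parameter open text-to-video model in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # offload idle layers so it fits on a consumer GPU

prompt = "a white shark in a tank glass aquarium"
frames = pipe(prompt, num_inference_steps=25, num_frames=16).frames
print(export_to_video(frames, "shark.mp4"))  # writes the clip to disk

The fp16 weights plus CPU offload are what make it fit on an ordinary gaming card; note that the exact shape returned by .frames varies a bit between diffusers versions.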
 
I installed an open-source text-to-video AI on my computer just yesterday.


Certainly not as powerful as the online tools, but still impressive for a first iteration. And for text-to-image, after a short lapse of time (~1 year), a good PC can now render pictures as well as the online services do. I wonder how fast it will happen for video...

"a white shark in a tank glass aquarium"

vidshark.gif

(you can easily guess which dataset they trained their model on :D )
There is a way to remove the watermark if you want. Look for Aitrepreneur on YouTube.
 
Is Google sucking for real, or did they just not watch their flank and are about to crush it by 2024?
 
Google is sucking for real, apparently. Bard is out and is destroyed by Bing in almost every possible way. Bing has come a long way since a mediocre launch a few weeks ago and has become pretty amazing now, a blindingly fast evolution. Let's see if Google can do it too, but right now Bard's issues seem more serious than Bing's issues at launch. Bard is prone to hallucination, can't add 9 + 10, can't code, and creativity-wise it is pretty lame.

 
AI Bots Are Policing Toxic Voice Chat in Videogames
BY SARAH E. NEEDLEMAN

In the videogame “Gun Raiders,” a player using voice chat could be muted within seconds after hurling a racial slur. The censor isn’t a human content moderator or fellow gamer—it is an artificial intelligence bot. Voice chat has been a popular part of videogaming for more than a decade, allowing players to socialize and strategize. According to a recent study, nearly three-quarters of those using the feature have experienced incidents such as name-calling, bullying and threats.
New AI-based software aims to reduce such harassment. Developers behind the tools say the technology is capable of understanding most of the context in voice conversations and can differentiate between playful and dangerous threats in voice chat.
If a player violates a game’s code of conduct, the tools can be set to automatically mute him or her in real time. The punishments can last as long as the developer chooses, typically a few minutes. The AI can also be programmed to ban a player from accessing a game after multiple offenses.

The major console makers—Microsoft Corp., Sony Group Corp. and Nintendo Co.—offer voice chat and have rules prohibiting hate speech, sexual harassment and other forms of misconduct. The same goes for Meta Platforms Inc.'s virtual-reality system Quest and Discord Inc., which operates a communication platform used by many computer gamers. None monitor the talk in real time, and some say they are leery of AI-powered moderation in voice chat because of concerns about accuracy and customer privacy.
The technology is starting to get picked up by game makers.

Gun Raiders Entertainment Inc., the small Vancouver studio behind “Gun Raiders,” deployed AI software called ToxMod to help moderate players’ conversations during certain parts of the game after discovering more violations of its community guidelines than its staff previously thought. “We were surprised by how much the N-word was there,” said the company’s operating chief and co-founder, Justin Liebregts. His studio began testing ToxMod’s ability to accurately detect hate speech about eight months ago. Since then, the bad behavior has declined and the game is just as popular as it was before, Mr. Liebregts said, without providing specific data.

Traditionally, game companies have relied on players to report problems in voice chat, but many don't bother, and each report requires investigation.


‘Gun Raiders’ uses a bot that temporarily mutes players who violate the game’s code of conduct. GUN RAIDERS ENTERTAINMENT

Developers of the AI-monitoring technology say gaming companies may not know how much toxicity occurs in voice chat or that AI tools can identify and react to the problem in real time.
“Their jaw drops a little bit” when they see the behaviors the software can catch, said Mike Pappas, chief executive and co-founder of Modulate, the Somerville, Mass., startup that makes ToxMod. “A literal statement we hear all the time is: ‘I knew it was bad. I didn’t know it was this bad.’ ”
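
In code terms, the escalation the article describes (a timed mute applied in real time, then a ban after repeated offenses) reduces to a small policy. A purely hypothetical sketch; every name and threshold below is invented, and none of this is ToxMod's actual API:

import time
from collections import defaultdict

MUTE_SECONDS = 180   # "typically a few minutes", per the article
BAN_THRESHOLD = 3    # offenses before escalating to a ban (made-up number)

offense_counts = defaultdict(int)
muted_until = {}

def handle_violation(player_id, now=None):
    """React to a confirmed code-of-conduct violation in real time."""
    now = time.time() if now is None else now
    offense_counts[player_id] += 1
    if offense_counts[player_id] >= BAN_THRESHOLD:
        return "ban"                          # repeated offenses -> ban
    muted_until[player_id] = now + MUTE_SECONDS
    return "mute"                             # earlier offenses -> timed mute

def can_speak(player_id, now=None):
    """Voice chat is gated on the player's mute timer."""
    now = time.time() if now is None else now
    return now >= muted_until.get(player_id, 0.0)

The hard part, of course, is everything upstream of handle_violation: deciding, from audio and context, that a violation actually occurred.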
 
The AI can also be programmed to ban a player from accessing a game after multiple offenses.
smells like lawsuits.

especially in cases where ai bans a player for "violations" the way it banned youtube channels talking about black chess pieces or something.
 
Google is sucking for real, apparently. Bard is out and is destroyed by Bing in almost every possible way. Bing has come a long way since a mediocre launch a few weeks ago and has become pretty amazing now, a blindingly fast evolution.

What launch are you talking about? The latest version? Bing has been around for more than a decade now. I know time passes faster as we get older, but I wouldn't yet consider that a "few" weeks.
 
What launch are you talking about? The latest version? Bing has been around for more than a decade now. I know time passes faster as we get older, but I wouldn't yet consider that a "few" weeks.

The Bing chatbot, not the Bing search engine
 
smells like lawsuits.

especially in cases where ai bans a player for "violations" the way it banned youtube channels talking about black chess pieces or something.
Smells like idiots.

Bad behavior has gotten people banned from games (both live sports events and online games) frequently. AI just changes the referee/moderator. Bad behavior is determined by the game/site rules. Why should a person be free to spew hateful language in a private game environment? They can go out into their neighborhood and do it there without fear. Under what circumstances is hate speech legal in a private venue?
 
AI just changes the referee/moderator. Bad behavior is determined by the game/site rules.
the problem is false positives. when a chess stream/channel gets banned for "racism" because it's talking about black pieces, that's false. the channel is victimized by that. there is harm, and it's not hard to demonstrate that harm. when one party harms another for false reasons, that's what we call civil liability. costing someone revenue for no reason/based on false accusations will do that.

maybe in principle, the ai can get better at identifying context than humans. i'm not sure. however, under current usage it is certainly not capable of it.

Why should a person be free to spew hateful language in a private game environment? They can go out into their neighborhood and do it there without fear. Under what circumstances is hate speech legal in a private venue?
first of all, hate speech *is* legal in private venues. the answer to that question is "always". the question is then whether the venue will tolerate it, not its legality.

better question: in an environment where it is trivial to never hear from players you don't want to hear from...what is this ai moderation filter adding, other than chances to make a mistake and punish people for no reason? if i turn off chat in rocket league, they could be calling me literally anything, or assigning me homework to read the communist manifesto 5x or whatever. i'd never know. they're shouting into the void. i can do this at the press of a button, without any advanced ai, training data for/development of said ai, or impact on performance from some bullcrap monitoring the game environment constantly in real time. i have been able to do this in decently designed games since before a substantial percentage of the current online playerbase was born. when, precisely, did this become insufficient? i can hear from exactly who i want to hear from, and avoid hearing from those i do not. is there an explanation for why this is insufficient?

Bad behavior has gotten people banned from games (both live sports events and online games) frequently.
sometimes with merit (direct statement from a player at the venue), sometimes without merit (like that racecar driver getting penalized for what his dad said, what the heck). there was also that one olympian who wanted different immigration policy or something and got banned from the olympics for racism, even though it objectively was not racism.

another interesting one is that league of legends team who got severely penalized for looking up at the broadcast screen. in front of them.

the question always comes down to "who gets to decide", because that person or group of people will have control. at least when humans make the choice, they are open to criticism, instead of an ai algorithm that's created by a person but then gets treated as the final moderator in the moment. all this for what, to prevent some babies from having their feelings hurt because they didn't want to mute chat? we're supposed to pay more, have worse performance, eat false positives, and use up development time for *that*? really?
 
I've asked GPT-3.5, GPT-4, and Bard about macro/monetary economics, and they pretty much take a post-Keynesian position on macroeconomic questions most of the time.
 
I've asked GPT-3.5, GPT-4, and Bard about macro/monetary economics, and they pretty much take a post-Keynesian position on macroeconomic questions most of the time.
To translate, it has, without admitting to it, an MMT perspective when asked technical questions.

But when pressed, it will deny the association.
 