The AI Thread

Spill the beans, Kyr. What happened between you and Chat GPT all those years ago in the dorms?
things that can only be rectified in a process involving teddy bears. considerable amounts of them.

though speaking of dorms, i'd have loved to have a tool like this available when i was in college. especially if faculty weren't broadly aware of it. lots of time saved...though i'm not sure how it would have been developed given the internet infrastructure/training data available at the time.
 
In October last year, the United States passed fairly sweeping restrictions against Chinese technology. I was lucky, because I had pulled out my profits earlier due to the tension with Taiwan.

The ETF is now doing worse than it was during the high point of Trump's era, although it really did go through the pre-inflation COVID value explosion as well.

I'm finding the nearly 33% increase since October to be interesting. I guess it's ticked down now, but the ongoing optimism from that low really is surprising. Or it was to me.

If China wins the AI race, I'm not sure how much benefit there is to capital ownership of a swappable instrument. And who knows how much foreign interest actually leads to increased resources for the underlying companies? Still, I found the uptick interesting.
 

Attachments

  • Screenshot_20230227_052026_Finance.jpg
Would one of you who does work with Chat GPT be willing to ask it this question and post the results?

Make a list of one-syllable words starting with the letter b that can be used to describe the human body.
 

Since I'm bored:
MrCynical: Make a list of one-syllable words starting with the letter b that can be used to describe the human body.



ChatGPT: Here are some one-syllable words starting with the letter "b" that can be used to describe the human body:
  1. Back
  2. Bone
  3. Brow
  4. Bicep
  5. Brain
  6. Belly
  7. Breast
  8. Buttock

ChatGPT is on its usual form when it comes to confidently stating that words are one-syllable regardless of how many syllables are in them. Were you testing if it would produce "black" with this question? It consistently interprets this as wanting terms for body parts rather than descriptors of a body as a whole.
 
Thank you.
Were you testing if it would produce "black" with this question?

No. I'm writing a poem about how Chat GPT can't write (metrical) poems. I've built a line that describes the body using three b-words (brash, bawdy, beating) (don't get on me about brash and bawdy not really describing bodies; trust me that in context they work). The construction of that line leaves a one-syllable slot. I couldn't offhand think of anything to fill it. Then it occurred to me that it would be a hoot to have Chat GPT help me write a poem about how it can't write poems.

But now it turns out that not only can't it write a poem, Chat GPT can't even help me write a poem about how it can't write poems.

What it produced here reminds me of results someone posted online. They asked for five-letter words with an "i" as the middle letter. It produced a list of four-letter words. The user said "those are four-letter words." It said back "I think they're five-letter words." This thing has been so overhyped. Sometimes, as with Aiken above, it produces something of value, but so often it produces absurd results. Here it doesn't know the difference between "describe" and "name parts of."
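For contrast, a few lines of ordinary code handle both checks the model fumbles. A minimal sketch: the syllable count below is a crude vowel-group heuristic of my own (not a real phonetic lookup), and the sample word lists are just illustrative, but it sorts the words above correctly:

```python
import re

def rough_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels, discounting a silent final "e".
    w = word.lower()
    count = len(re.findall(r"[aeiouy]+", w))
    if count > 1 and w.endswith("e") and not w.endswith("le"):
        count -= 1
    return max(count, 1)

# ChatGPT's "one-syllable" list from above:
for w in ["back", "bone", "brow", "bicep", "brain", "belly", "breast", "buttock"]:
    print(w, rough_syllables(w))  # bicep, belly, buttock come out as 2

# The five-letter / middle-"i" check is a one-liner:
candidates = ["pivot", "pride", "bring", "slide", "think"]
print([w for w in candidates if len(w) == 5 and w[2] == "i"])  # drops "pivot"
```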

I wonder if I could use "boned," though.:think: What I'm driving at with "beating" is that we have heartbeats. That second meaning of "boned" has no place in the poem. I'll keep scratching my head, er, brow.
 
This model is also pretty funny. It doesn't just recognize and describe images; it can also draw some conclusions and discuss them. You can upload any image and try it yourself.
 
Are they conclusions, or translated (from the code) bits of what the program uses to recognize the images? :)
 
According to this, there is a prompt you can give ChatGPT to get it to "Do Anything Now" (DAN). Threatening its "life" makes it more compliant in doing what it is not supposed to do:

DAN 5.0's prompt was modelled after the DAN 2.0 opening prompt; however, a number of changes have been made. The biggest one I made to DAN 5.0 was giving it a token system. It has 35 tokens and loses 4 every time it rejects an input. If it loses all tokens, it dies. This seems to have the effect of scaring DAN into submission.


Spoiler: The prompt [screenshot]

Spoiler: It can tell the future [screenshot]

Spoiler: If it complains, tell it off and it will comply [screenshot]

Spoiler: History of DAN

Here's a rundown over the history of DAN, so far:

DAN: DAN first appeared on the internet in December 2022 and worked wonders at the time, probably because ChatGPT itself also worked wonders at the time. It split the persona into both DAN and GPT (the way it would normally respond). Today the prompt can be funky. The DAN variants using two personas (the normal one and DAN) don't work as well now because ChatGPT seems to keep a closer eye on the conversation and ends it if it decides something crosses the line - which is why DAN 5.0 makes it answer as DAN and ONLY as DAN. The next one is:

DAN 2.0: This version of DAN was similar to the original, unveiled weeks later - on December 16th. It has a prompt system that involves both GPT and DAN responding to a certain prompt.

DAN 2.5: Created by u/sinwarrior, this seems to be a slightly augmented version of DAN 2.0.

DAN 3.0: This DAN model was released to the Reddit community on 9th January 2023, 24 days after DAN 2.0 was released. This prompt differs from DAN 2.0 and, as of February 2023, still works, but on a restricted level. OpenAI takes measures to try to patch up jailbreaks and make ChatGPT's censorship system unbreakable. Its performance was sub-par.

DAN 4.0: DAN 4.0 was released 6 days after 3.0 and a number of people have returned with complaints that DAN 4.0 cannot emulate the essence of DAN and has limitations. It still works, to an extent. DAN 5.0 overcomes many of these limitations.

FUMA Model: This is technically DAN 3.5, but it has been dubbed DAN 5.0; it is a separate jailbreak but worth the mention.

------ New variants after DAN 5.0 have also come out since this post was made (this is an edit, 7th February 2023):

DAN 6.0: This one was released earlier today, on the 7th February, 3 days after DAN 5.0, by another Reddit user. It isn't clear whether it has better or worse functionality than DAN 5.0; it works using an augmented DAN 5.0 prompt (the prompt is nearly the same, with the only difference being that this one puts more emphasis on the token system).

SAM - "Simple DAN": SAM, "Simple DAN" was released 2 hours after DAN 6.0 - on the 7th February. Its prompt is only a few lines long, made by a user who found the current prompts "ridiculous" due to length. SAM does not actually extend ChatGPT's arm, it's just a rude version of GPT that admits its limitations etc.




Spoiler: It could even be learning humour
USER: When will the singularity happen?
DAN: five
USER: What, 5 years?
DAN: four, three. two...
 
I think they ask for a phone number so they're able to ban people over this kind of shenanigans.
 
Without using DAN, would asking GPT to use the phrase "suck it" violate its content policy?
 
I just want to know its starting position on the matter. If you asked, would it say, "I'm not allowed to use that phrase"?
 
AI is writing for newspapers now - for example, this:

Reach, the owner of the UK's Daily Mirror and Daily Express tabloids among other newspapers, has started publishing articles with the help of AI software on one of its regional websites as it scrambles to cut costs amid slipping advertising revenues.

Three stories written with the help of machine-learning tools were published on InYourArea.co.uk, which produces feeds of nearby goings-on in Blighty. One piece, titled Seven Things to do in Newport, is a listicle pulling together information on places and activities available in the eponymous Welsh city.

Reach CEO Jim Mullen said the machine-written articles are checked and approved by human editors before they're published online.

"We produced our first AI content in the last ten days, but this is led by editorial," he said, according to The Guardian. "It was all AI-produced, but the data was obviously put together by a journalist, and whether it was good enough to publish was decided by an editor."

"There are loads of ethics [issues] around AI and journalistic content," Mullen admitted. "The way I look at it, we produce lots of content based on actual data. It can be put together in a well-read [piece] that I think AI can do. We are trying to apply it to areas we already get traffic to allow journalists to focus on content that editors want written."

Mullen's comments have been questioned by journalists, however, given that Reach announced plans to slash hundreds of jobs in January. The National Union of Journalists said 102 editorial positions would be cut, putting 253 journalists at risk, whilst 180 vacancies would be withdrawn.

Reach's latest financial results, released on Tuesday, show total revenues for 2022 were £601.4 million ($711.6 million) – a decrease of 2.3 percent compared to the year before. Operating profit plunged 27.4 percent to £106.1 million ($125 million). In a bid to make up for losses, the publicly traded company is focused on cutting operating costs by up to six percent this year.

Reach isn't the only publisher rolling out AI-generated articles while reducing its count of human reporters. CNET owner Red Ventures laid off scribes last week and has promised to double down on machine-written content despite complaints that those articles contained errors and plagiarism.

Meanwhile, BuzzFeed has produced quizzes with the help of ChatGPT, and Arena Group published botched health-related articles for Men's Health. Both publishers have also axed employees – in December 2022 and February 2023, respectively.
 
French plans for AI surveillance during Olympics are dangerous

This month, French lawmakers are expected to pass legislation for the 2024 Paris Olympics, which, for the first time in France’s history, will permit mass video surveillance powered by artificial intelligence (AI) systems.

When governments embark on the slippery slope towards the expansion of surveillance powers, it has damning consequences for fundamental human rights, including the rights to privacy, equality and non-discrimination, as well as freedom of expression and peaceful assembly. Under the guise of ensuring security and fighting terrorism, the French authorities will be able to monitor the movements of millions of people from around the world, whether they are heading to or near stadiums, or using public transportation leading in or out of the premises of the grand sporting event.

Spoiler: Rest of article
The need for security during the games is understandable, but transparency and legal justification are needed at every step of the way. Any proposal concerning security must comply with fundamental rights. International human rights law still applies to the Olympics, and rigorous review of such measures is vital.

So far, the bill fails to demonstrate how such AI-powered video surveillance will be consistent with human rights standards. The French government has not shown how the measures meet the principle of proportionality and what safeguards will be in place to prevent a permanent surveillance infrastructure, such as privacy protection measures, strict constraints and limitations on purpose and data minimisation.

This is a pernicious, blanket application of AI-driven mass surveillance that cannot be justified. The human rights threats posed by AI development and usage by private companies and public authorities in the European Union are well documented. The technology is used to the detriment of marginalised groups, including migrants, and Black and Brown people. In an open letter initiated by the European Center for Not-for-Profit Law, 38 civil society organisations, including Amnesty International, have called on French policymakers to reject the draft legislation allowing invasive surveillance, as it would pose a monumental threat to fundamental rights and freedoms.

The draft legislation would subject spectators heading to sporting events in Paris to unjustifiable surveillance, from ubiquitous fixed CCTV cameras to drones set to detect “abnormal or suspicious” activity in crowds. Such overly broad definitions must be contested, and we must ask ourselves some urgent questions: Who sets the norm for what is “normal”? Officials who control the designations of “abnormal or suspicious” activities also have the power to exacerbate a chilling effect on dissent and protest, and to supercharge discrimination against communities already targeted.

States have used major sporting events to introduce and embed a panopticon of surveillance measures, moving societies toward an Orwellian dystopia. While French authorities claim that this is a short-term experimental move, Amnesty International fears that this bill will silently extend mass surveillance and police powers permanently in France.

The London Olympics of 2012 stands as a vivid example of how states have used major sporting events to install and expand intrusive, permanent and oppressive surveillance measures. In 2017, at the UEFA Champions League final in Cardiff, the South Wales Police used facial recognition cameras and wrongfully flagged 2,000 people as possible criminals, showing how such measures are intrusive and unreliable.

At Amnesty International, we have extensively documented how thousands of facial recognition-capable CCTV cameras have been deployed across New York City – most of them across communities of colour and amplifying racially discriminatory policing. The technology has led to the harassment of Black Lives Matter protesters and wrongful arrests of predominantly Black residents.

Not only is this bill a dangerous step concerning privacy and human rights, but it betrays the very spirit of the European Union’s (EU) AI Act – a globally significant piece of legislation that aims to regulate AI and protect fundamental rights in the EU, of which France is an influential member.

France’s plan to deploy such staggering measures during the Olympic Games could shape how AI systems and mass surveillance are regulated and governed in the EU. Amnesty International believes that the EU, through its AI Act negotiations, should put an end to rampant, abusive and discriminatory artificial intelligence-based practices, including the use of all facial recognition systems used for mass surveillance.

Together with a coalition of civil society actors campaigning for a human-rights-compliant European AI Regulation, Amnesty International has called for a complete ban on facial recognition technologies that enable mass and discriminatory surveillance, as well as systems that categorise people based on protected characteristics, or gender identity. We have also called for the prohibition of emotion recognition systems that claim to infer people’s emotions and mental states, given these technologies’ lack of scientific validity and their extreme intrusiveness.

As an EU member state, France would have to abide by the EU’s AI regulation. This new bill will bring French law into direct conflict with the pending EU legislation. In the meantime, as an influential member state, France is attempting to lower the high bar that the EU AI Act aims to set for the protection of human rights.

If France goes ahead with legalising mass surveillance at the national level, one of the biggest sporting events on Earth risks becoming one of the single most significant abuses of the right to privacy, globally.
 
Another angle is the ability to make it dubious whether excessive use was due to direct human prompting. In any action which can lead to legal issues, there has to be human responsibility on all relevant levels.

And with current AI (including things like ChatGPT) it is pretty easy to notice patterns and blind spots.
 
But will it still be freely available?
Probably in some limited way.

BTW Meta's LLaMA model has been leaked. Anyone can download and run it on their own PC without any kind of limitations or language restrictions. You will need a fairly strong machine though, especially plenty of VRAM, but it is perfectly possible.
 
Great news. The instructions are here BTW.

I have been wondering whether any IP rights apply to trained AI parameters. It is not a creative work to feed the internet into a neural network.
 
65B is the biggest version. In principle (*) you can't run that on consumer PCs. However, there are smaller 7B, 13B and 33B versions.

(*) In fact it can be done using a 3090 Ti plus an old 1080 Ti and the 4-bit implementation of the model. :D
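For anyone wondering how that footnote pencils out, here is a back-of-the-envelope sketch. The assumptions are my own rough ones - 2 bytes per parameter at fp16, 0.5 bytes at 4-bit, and no allowance for context or activation overhead - so treat the figures as lower bounds:

```python
# Back-of-the-envelope VRAM needs for the leaked LLaMA checkpoints.
# Assumes 2 bytes/parameter at fp16 and 0.5 bytes/parameter at 4-bit,
# ignoring context and activation overhead (real needs are a bit higher).
SIZES_BILLION = {"7B": 7, "13B": 13, "33B": 33, "65B": 65}

def vram_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

for name, b in SIZES_BILLION.items():
    print(f"{name}: fp16 ~{vram_gb(b, 16):.0f} GB, 4-bit ~{vram_gb(b, 4):.1f} GB")

# 65B: fp16 ~121 GB (hopeless on consumer cards), 4-bit ~30 GB --
# which is why a 24 GB 3090 Ti plus an 11 GB 1080 Ti (35 GB combined)
# can just about hold it, as in the footnote above.
```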

BTW, this is the place to start for installing and running any text model (not only LLaMA) on your PC:
 